00:00:00.001 Started by upstream project "autotest-per-patch" build number 132334 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.226 Using shallow fetch with depth 1 00:00:00.226 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.226 > git --version # timeout=10 00:00:00.313 > git --version # 'git version 2.39.2' 00:00:00.313 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.373 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.373 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.938 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.952 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.967 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.967 > git config core.sparsecheckout # timeout=10 00:00:04.980 > git read-tree -mu HEAD # timeout=10 00:00:04.998 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.026 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.026 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.115 [Pipeline] Start of Pipeline 00:00:05.128 [Pipeline] library 00:00:05.130 Loading library shm_lib@master 00:00:05.130 Library shm_lib@master is cached. Copying from home. 00:00:05.145 [Pipeline] node 00:00:20.147 Still waiting to schedule task 00:00:20.147 Waiting for next available executor on ‘vagrant-vm-host’ 00:19:53.687 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:19:53.689 [Pipeline] { 00:19:53.702 [Pipeline] catchError 00:19:53.704 [Pipeline] { 00:19:53.720 [Pipeline] wrap 00:19:53.730 [Pipeline] { 00:19:53.739 [Pipeline] stage 00:19:53.741 [Pipeline] { (Prologue) 00:19:53.761 [Pipeline] echo 00:19:53.762 Node: VM-host-SM4 00:19:53.769 [Pipeline] cleanWs 00:19:53.791 [WS-CLEANUP] Deleting project workspace... 00:19:53.791 [WS-CLEANUP] Deferred wipeout is used... 
00:19:53.802 [WS-CLEANUP] done 00:19:54.013 [Pipeline] setCustomBuildProperty 00:19:54.137 [Pipeline] httpRequest 00:19:54.453 [Pipeline] echo 00:19:54.455 Sorcerer 10.211.164.20 is alive 00:19:54.467 [Pipeline] retry 00:19:54.470 [Pipeline] { 00:19:54.486 [Pipeline] httpRequest 00:19:54.491 HttpMethod: GET 00:19:54.492 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:19:54.493 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:19:54.494 Response Code: HTTP/1.1 200 OK 00:19:54.495 Success: Status code 200 is in the accepted range: 200,404 00:19:54.496 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:19:54.642 [Pipeline] } 00:19:54.662 [Pipeline] // retry 00:19:54.672 [Pipeline] sh 00:19:54.954 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:19:54.971 [Pipeline] httpRequest 00:19:55.283 [Pipeline] echo 00:19:55.285 Sorcerer 10.211.164.20 is alive 00:19:55.295 [Pipeline] retry 00:19:55.297 [Pipeline] { 00:19:55.314 [Pipeline] httpRequest 00:19:55.319 HttpMethod: GET 00:19:55.320 URL: http://10.211.164.20/packages/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:19:55.321 Sending request to url: http://10.211.164.20/packages/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:19:55.322 Response Code: HTTP/1.1 200 OK 00:19:55.322 Success: Status code 200 is in the accepted range: 200,404 00:19:55.323 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:19:59.286 [Pipeline] } 00:19:59.304 [Pipeline] // retry 00:19:59.312 [Pipeline] sh 00:19:59.592 + tar --no-same-owner -xf spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz 00:20:02.889 [Pipeline] sh 00:20:03.165 + git -C spdk log --oneline -n5 00:20:03.165 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:20:03.165 3b58329b1 bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:20:03.165 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:20:03.165 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:20:03.165 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:20:03.180 [Pipeline] writeFile 00:20:03.192 [Pipeline] sh 00:20:03.470 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:20:03.482 [Pipeline] sh 00:20:03.760 + cat autorun-spdk.conf 00:20:03.760 SPDK_RUN_FUNCTIONAL_TEST=1 00:20:03.760 SPDK_TEST_NVMF=1 00:20:03.760 SPDK_TEST_NVMF_TRANSPORT=tcp 00:20:03.760 SPDK_TEST_USDT=1 00:20:03.760 SPDK_TEST_NVMF_MDNS=1 00:20:03.760 SPDK_RUN_UBSAN=1 00:20:03.760 NET_TYPE=virt 00:20:03.760 SPDK_JSONRPC_GO_CLIENT=1 00:20:03.760 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:03.766 RUN_NIGHTLY=0 00:20:03.767 [Pipeline] } 00:20:03.780 [Pipeline] // stage 00:20:03.797 [Pipeline] stage 00:20:03.798 [Pipeline] { (Run VM) 00:20:03.810 [Pipeline] sh 00:20:04.088 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:20:04.088 + echo 'Start stage prepare_nvme.sh' 00:20:04.088 Start stage prepare_nvme.sh 00:20:04.088 + [[ -n 8 ]] 00:20:04.088 + disk_prefix=ex8 00:20:04.088 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:20:04.088 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:20:04.088 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:20:04.088 ++ 
SPDK_RUN_FUNCTIONAL_TEST=1 00:20:04.088 ++ SPDK_TEST_NVMF=1 00:20:04.088 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:20:04.088 ++ SPDK_TEST_USDT=1 00:20:04.088 ++ SPDK_TEST_NVMF_MDNS=1 00:20:04.088 ++ SPDK_RUN_UBSAN=1 00:20:04.088 ++ NET_TYPE=virt 00:20:04.088 ++ SPDK_JSONRPC_GO_CLIENT=1 00:20:04.088 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:04.088 ++ RUN_NIGHTLY=0 00:20:04.088 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:20:04.088 + nvme_files=() 00:20:04.088 + declare -A nvme_files 00:20:04.088 + backend_dir=/var/lib/libvirt/images/backends 00:20:04.089 + nvme_files['nvme.img']=5G 00:20:04.089 + nvme_files['nvme-cmb.img']=5G 00:20:04.089 + nvme_files['nvme-multi0.img']=4G 00:20:04.089 + nvme_files['nvme-multi1.img']=4G 00:20:04.089 + nvme_files['nvme-multi2.img']=4G 00:20:04.089 + nvme_files['nvme-openstack.img']=8G 00:20:04.089 + nvme_files['nvme-zns.img']=5G 00:20:04.089 + (( SPDK_TEST_NVME_PMR == 1 )) 00:20:04.089 + (( SPDK_TEST_FTL == 1 )) 00:20:04.089 + (( SPDK_TEST_NVME_FDP == 1 )) 00:20:04.089 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:20:04.089 + for nvme in "${!nvme_files[@]}" 00:20:04.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:20:04.089 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:20:04.089 + for nvme in "${!nvme_files[@]}" 00:20:04.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:20:04.089 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:20:04.089 + for nvme in "${!nvme_files[@]}" 00:20:04.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:20:04.089 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:20:04.089 + for nvme in "${!nvme_files[@]}" 00:20:04.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:20:04.089 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:20:04.089 + for nvme in "${!nvme_files[@]}" 00:20:04.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:20:04.089 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:20:04.089 + for nvme in "${!nvme_files[@]}" 00:20:04.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:20:04.089 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:20:04.089 + for nvme in "${!nvme_files[@]}" 00:20:04.089 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:20:04.346 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:20:04.346 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:20:04.346 + echo 'End stage prepare_nvme.sh' 00:20:04.346 End stage prepare_nvme.sh 00:20:04.356 [Pipeline] sh 00:20:04.633 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:20:04.633 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt 
--qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -H -a -v -f fedora39 00:20:04.633 00:20:04.633 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:20:04.633 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:20:04.633 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:20:04.633 HELP=0 00:20:04.633 DRY_RUN=0 00:20:04.633 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img, 00:20:04.633 NVME_DISKS_TYPE=nvme,nvme, 00:20:04.633 NVME_AUTO_CREATE=0 00:20:04.633 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img, 00:20:04.633 NVME_CMB=,, 00:20:04.633 NVME_PMR=,, 00:20:04.633 NVME_ZNS=,, 00:20:04.633 NVME_MS=,, 00:20:04.633 NVME_FDP=,, 00:20:04.633 SPDK_VAGRANT_DISTRO=fedora39 00:20:04.633 SPDK_VAGRANT_VMCPU=10 00:20:04.633 SPDK_VAGRANT_VMRAM=12288 00:20:04.633 SPDK_VAGRANT_PROVIDER=libvirt 00:20:04.633 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:20:04.633 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:20:04.633 SPDK_OPENSTACK_NETWORK=0 00:20:04.633 VAGRANT_PACKAGE_BOX=0 00:20:04.633 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:20:04.633 FORCE_DISTRO=true 00:20:04.633 VAGRANT_BOX_VERSION= 00:20:04.633 EXTRA_VAGRANTFILES= 00:20:04.633 NIC_MODEL=e1000 00:20:04.633 00:20:04.633 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:20:04.633 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:20:07.961 Bringing machine 'default' up with 'libvirt' provider... 00:20:08.894 ==> default: Creating image (snapshot of base box volume). 00:20:08.894 ==> default: Creating domain with the following settings... 
00:20:08.894 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732080688_65807c393bfd5c0bf625 00:20:08.894 ==> default: -- Domain type: kvm 00:20:08.894 ==> default: -- Cpus: 10 00:20:08.894 ==> default: -- Feature: acpi 00:20:08.894 ==> default: -- Feature: apic 00:20:08.894 ==> default: -- Feature: pae 00:20:08.894 ==> default: -- Memory: 12288M 00:20:08.894 ==> default: -- Memory Backing: hugepages: 00:20:08.894 ==> default: -- Management MAC: 00:20:08.894 ==> default: -- Loader: 00:20:08.894 ==> default: -- Nvram: 00:20:08.894 ==> default: -- Base box: spdk/fedora39 00:20:08.894 ==> default: -- Storage pool: default 00:20:08.894 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732080688_65807c393bfd5c0bf625.img (20G) 00:20:08.894 ==> default: -- Volume Cache: default 00:20:08.894 ==> default: -- Kernel: 00:20:08.894 ==> default: -- Initrd: 00:20:08.894 ==> default: -- Graphics Type: vnc 00:20:08.894 ==> default: -- Graphics Port: -1 00:20:08.894 ==> default: -- Graphics IP: 127.0.0.1 00:20:08.894 ==> default: -- Graphics Password: Not defined 00:20:08.894 ==> default: -- Video Type: cirrus 00:20:08.894 ==> default: -- Video VRAM: 9216 00:20:08.894 ==> default: -- Sound Type: 00:20:08.894 ==> default: -- Keymap: en-us 00:20:08.894 ==> default: -- TPM Path: 00:20:08.894 ==> default: -- INPUT: type=mouse, bus=ps2 00:20:08.894 ==> default: -- Command line args: 00:20:08.894 ==> default: -> value=-device, 00:20:08.894 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:20:08.894 ==> default: -> value=-drive, 00:20:08.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 00:20:08.894 ==> default: -> value=-device, 00:20:08.894 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:08.894 ==> default: -> value=-device, 00:20:08.894 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:20:08.894 ==> default: -> value=-drive, 00:20:08.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:20:08.894 ==> default: -> value=-device, 00:20:08.894 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:08.894 ==> default: -> value=-drive, 00:20:08.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:20:08.894 ==> default: -> value=-device, 00:20:08.894 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:08.894 ==> default: -> value=-drive, 00:20:08.894 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:20:08.894 ==> default: -> value=-device, 00:20:08.894 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:20:09.152 ==> default: Creating shared folders metadata... 00:20:09.152 ==> default: Starting domain. 00:20:11.079 ==> default: Waiting for domain to get an IP address... 00:20:25.990 ==> default: Waiting for SSH to become available... 00:20:27.369 ==> default: Configuring and enabling network interfaces... 
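The -device/-drive argument pairs in the domain settings above are how the harness exposes each raw backend image to the guest as an emulated NVMe namespace: a nvme controller device is declared first (with a serial number and PCI address), a -drive entry registers the raw image as a host block backend, and a nvme-ns device binds that backend to the controller with a namespace ID and the 4096-byte block sizes. A minimal standalone equivalent for a single one-namespace controller might look like the sketch below (image path, memory size, and machine type are illustrative, not taken from this run):

  # create a 5 GiB raw backing image for the emulated drive (hypothetical path)
  qemu-img create -f raw /tmp/ex-nvme.img 5G
  # boot a guest with one NVMe controller (serial 12340) and one namespace
  qemu-system-x86_64 \
    -machine q35,accel=kvm -m 2048 \
    -device nvme,id=nvme-0,serial=12340 \
    -drive format=raw,file=/tmp/ex-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

Multi-namespace controllers (nvme-1 above, backed by the ex8-nvme-multi*.img files) repeat the -drive plus -device nvme-ns pair on the same bus with increasing nsid values.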
00:20:32.683 default: SSH address: 192.168.121.101:22 00:20:32.683 default: SSH username: vagrant 00:20:32.683 default: SSH auth method: private key 00:20:34.609 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:20:44.601 ==> default: Mounting SSHFS shared folder... 00:20:45.235 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:20:45.235 ==> default: Checking Mount.. 00:20:46.613 ==> default: Folder Successfully Mounted! 00:20:46.613 ==> default: Running provisioner: file... 00:20:47.550 default: ~/.gitconfig => .gitconfig 00:20:48.155 00:20:48.155 SUCCESS! 00:20:48.155 00:20:48.155 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:20:48.155 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:20:48.155 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:20:48.155 00:20:48.164 [Pipeline] } 00:20:48.179 [Pipeline] // stage 00:20:48.188 [Pipeline] dir 00:20:48.188 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:20:48.190 [Pipeline] { 00:20:48.202 [Pipeline] catchError 00:20:48.204 [Pipeline] { 00:20:48.220 [Pipeline] sh 00:20:48.500 + vagrant ssh-config --host vagrant 00:20:48.500 + sed -ne /^Host/,$p 00:20:48.500 + tee ssh_conf 00:20:52.688 Host vagrant 00:20:52.688 HostName 192.168.121.101 00:20:52.688 User vagrant 00:20:52.688 Port 22 00:20:52.688 UserKnownHostsFile /dev/null 00:20:52.688 StrictHostKeyChecking no 00:20:52.688 PasswordAuthentication no 00:20:52.688 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:20:52.688 IdentitiesOnly yes 00:20:52.688 LogLevel FATAL 00:20:52.688 ForwardAgent yes 00:20:52.688 ForwardX11 yes 00:20:52.688 00:20:52.702 [Pipeline] withEnv 00:20:52.705 [Pipeline] { 00:20:52.719 [Pipeline] sh 00:20:53.002 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:20:53.002 source /etc/os-release 00:20:53.002 [[ -e /image.version ]] && img=$(< /image.version) 00:20:53.002 # Minimal, systemd-like check. 00:20:53.002 if [[ -e /.dockerenv ]]; then 00:20:53.002 # Clear garbage from the node's name: 00:20:53.002 # agt-er_autotest_547-896 -> autotest_547-896 00:20:53.002 # $HOSTNAME is the actual container id 00:20:53.002 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:20:53.002 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:20:53.002 # We can assume this is a mount from a host where container is running, 00:20:53.002 # so fetch its hostname to easily identify the target swarm worker. 
00:20:53.002 container="$(< /etc/hostname) ($agent)" 00:20:53.002 else 00:20:53.002 # Fallback 00:20:53.002 container=$agent 00:20:53.002 fi 00:20:53.002 fi 00:20:53.002 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:20:53.002 00:20:53.274 [Pipeline] } 00:20:53.290 [Pipeline] // withEnv 00:20:53.299 [Pipeline] setCustomBuildProperty 00:20:53.315 [Pipeline] stage 00:20:53.317 [Pipeline] { (Tests) 00:20:53.334 [Pipeline] sh 00:20:53.614 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:20:53.887 [Pipeline] sh 00:20:54.167 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:20:54.443 [Pipeline] timeout 00:20:54.443 Timeout set to expire in 1 hr 0 min 00:20:54.445 [Pipeline] { 00:20:54.460 [Pipeline] sh 00:20:54.790 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:20:55.356 HEAD is now at 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:20:55.369 [Pipeline] sh 00:20:55.653 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:20:55.926 [Pipeline] sh 00:20:56.210 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:20:56.485 [Pipeline] sh 00:20:56.770 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:20:57.029 ++ readlink -f spdk_repo 00:20:57.029 + DIR_ROOT=/home/vagrant/spdk_repo 00:20:57.029 + [[ -n /home/vagrant/spdk_repo ]] 00:20:57.029 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:20:57.029 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:20:57.029 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:20:57.029 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:20:57.029 + [[ -d /home/vagrant/spdk_repo/output ]] 00:20:57.029 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:20:57.029 + cd /home/vagrant/spdk_repo 00:20:57.029 + source /etc/os-release 00:20:57.029 ++ NAME='Fedora Linux' 00:20:57.029 ++ VERSION='39 (Cloud Edition)' 00:20:57.029 ++ ID=fedora 00:20:57.029 ++ VERSION_ID=39 00:20:57.029 ++ VERSION_CODENAME= 00:20:57.029 ++ PLATFORM_ID=platform:f39 00:20:57.029 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:20:57.029 ++ ANSI_COLOR='0;38;2;60;110;180' 00:20:57.029 ++ LOGO=fedora-logo-icon 00:20:57.029 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:20:57.029 ++ HOME_URL=https://fedoraproject.org/ 00:20:57.029 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:20:57.029 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:20:57.029 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:20:57.029 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:20:57.029 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:20:57.029 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:20:57.029 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:20:57.029 ++ SUPPORT_END=2024-11-12 00:20:57.029 ++ VARIANT='Cloud Edition' 00:20:57.029 ++ VARIANT_ID=cloud 00:20:57.029 + uname -a 00:20:57.029 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:20:57.029 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:57.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:57.288 Hugepages 00:20:57.288 node hugesize free / total 00:20:57.625 node0 1048576kB 0 / 0 00:20:57.625 node0 2048kB 0 / 0 00:20:57.625 00:20:57.625 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:57.625 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:20:57.625 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:20:57.625 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:20:57.625 + rm -f /tmp/spdk-ld-path 00:20:57.625 + source autorun-spdk.conf 00:20:57.625 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:20:57.625 ++ SPDK_TEST_NVMF=1 00:20:57.625 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:20:57.625 ++ SPDK_TEST_USDT=1 00:20:57.625 ++ SPDK_TEST_NVMF_MDNS=1 00:20:57.625 ++ SPDK_RUN_UBSAN=1 00:20:57.625 ++ NET_TYPE=virt 00:20:57.625 ++ SPDK_JSONRPC_GO_CLIENT=1 00:20:57.625 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:57.625 ++ RUN_NIGHTLY=0 00:20:57.625 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:20:57.625 + [[ -n '' ]] 00:20:57.625 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:20:57.625 + for M in /var/spdk/build-*-manifest.txt 00:20:57.625 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:20:57.625 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:57.625 + for M in /var/spdk/build-*-manifest.txt 00:20:57.625 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:20:57.625 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:57.625 + for M in /var/spdk/build-*-manifest.txt 00:20:57.625 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:20:57.625 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:20:57.625 ++ uname 00:20:57.625 + [[ Linux == \L\i\n\u\x ]] 00:20:57.625 + sudo dmesg -T 00:20:57.625 + sudo dmesg --clear 00:20:57.625 + dmesg_pid=5266 00:20:57.625 + sudo dmesg -Tw 00:20:57.625 + [[ Fedora Linux == FreeBSD ]] 00:20:57.625 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:20:57.625 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:20:57.625 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:20:57.625 + [[ -x /usr/src/fio-static/fio ]] 00:20:57.625 + export FIO_BIN=/usr/src/fio-static/fio 00:20:57.625 + FIO_BIN=/usr/src/fio-static/fio 00:20:57.625 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:20:57.625 + [[ ! -v VFIO_QEMU_BIN ]] 00:20:57.625 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:20:57.625 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:57.625 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:57.625 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:20:57.625 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:57.625 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:57.625 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:20:57.625 05:32:17 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:20:57.625 05:32:17 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:20:57.625 05:32:17 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:20:57.625 05:32:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:20:57.625 05:32:17 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:20:57.887 05:32:17 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:20:57.887 05:32:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:57.887 05:32:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:20:57.887 05:32:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:57.887 05:32:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.887 05:32:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.887 05:32:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.887 05:32:17 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.887 05:32:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.887 05:32:17 -- paths/export.sh@5 -- $ export PATH 00:20:57.888 05:32:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.888 05:32:17 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:57.888 05:32:17 -- common/autobuild_common.sh@486 -- $ date +%s 00:20:57.888 05:32:17 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732080737.XXXXXX 00:20:57.888 05:32:17 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732080737.2CZawA 00:20:57.888 05:32:17 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:20:57.888 05:32:17 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:20:57.888 05:32:17 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:57.888 05:32:17 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:57.888 05:32:17 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:57.888 05:32:17 -- common/autobuild_common.sh@502 -- $ get_config_params 00:20:57.888 05:32:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:20:57.888 05:32:17 -- common/autotest_common.sh@10 -- $ set +x 00:20:57.888 05:32:17 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:20:57.888 05:32:17 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:20:57.888 05:32:17 -- pm/common@17 -- $ local monitor 00:20:57.888 05:32:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:57.888 05:32:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:57.888 05:32:17 -- pm/common@25 -- $ sleep 1 00:20:57.888 05:32:17 -- pm/common@21 -- $ date +%s 00:20:57.888 05:32:17 -- pm/common@21 -- $ date +%s 00:20:57.888 05:32:17 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732080737 00:20:57.888 05:32:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732080737 00:20:57.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732080737_collect-cpu-load.pm.log 00:20:57.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732080737_collect-vmstat.pm.log 00:20:58.822 05:32:18 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:20:58.822 05:32:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:20:58.822 05:32:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:20:58.822 05:32:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:58.822 05:32:18 -- spdk/autobuild.sh@16 -- $ date -u 00:20:58.822 Wed Nov 20 05:32:18 AM UTC 2024 00:20:58.822 05:32:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:20:58.822 v25.01-pre-192-g57b682926 00:20:58.822 05:32:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:20:58.822 05:32:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:20:58.822 05:32:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:20:58.822 05:32:18 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:20:58.822 05:32:18 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:20:58.822 05:32:18 -- common/autotest_common.sh@10 -- $ set +x 00:20:58.822 ************************************ 00:20:58.822 START TEST ubsan 00:20:58.822 ************************************ 00:20:58.822 using ubsan 00:20:58.822 05:32:18 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:20:58.822 00:20:58.822 real 0m0.000s 00:20:58.822 user 0m0.000s 00:20:58.822 sys 0m0.000s 00:20:58.822 05:32:18 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:20:58.822 ************************************ 00:20:58.822 END TEST ubsan 00:20:58.822 ************************************ 00:20:58.822 05:32:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:20:58.822 05:32:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:20:58.822 05:32:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:20:58.822 05:32:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:20:58.822 05:32:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:20:58.822 05:32:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:20:58.822 05:32:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:20:58.822 05:32:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:20:58.822 05:32:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:20:58.822 05:32:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:20:59.080 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:20:59.080 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:20:59.646 Using 'verbs' RDMA provider 00:21:15.541 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:21:30.441 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:21:30.441 go version go1.21.1 linux/amd64 00:21:30.441 Creating mk/config.mk...done. 00:21:30.441 Creating mk/cc.flags.mk...done. 
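The run_test invocations that follow (run_test ubsan echo 'using ubsan', run_test make make -j10) produce the START TEST / END TEST banners and the real/user/sys timing lines seen in this log. As a rough approximation of that wrapper pattern for readers following along (the actual helper lives in SPDK's autotest_common.sh, visible in the xtrace above, and does additional argument checks and bookkeeping):

  # illustrative sketch of a banner-and-timing test wrapper, not SPDK's exact code
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # run the wrapped command, printing real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  # usage, as in this log:
  run_test ubsan echo 'using ubsan'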
00:21:30.441 Type 'make' to build. 00:21:30.441 05:32:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:21:30.441 05:32:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:21:30.441 05:32:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:21:30.441 05:32:48 -- common/autotest_common.sh@10 -- $ set +x 00:21:30.441 ************************************ 00:21:30.441 START TEST make 00:21:30.441 ************************************ 00:21:30.441 05:32:48 make -- common/autotest_common.sh@1127 -- $ make -j10 00:21:30.441 make[1]: Nothing to be done for 'all'. 00:21:45.346 The Meson build system 00:21:45.346 Version: 1.5.0 00:21:45.346 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:21:45.346 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:21:45.346 Build type: native build 00:21:45.346 Program cat found: YES (/usr/bin/cat) 00:21:45.346 Project name: DPDK 00:21:45.346 Project version: 24.03.0 00:21:45.346 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:21:45.346 C linker for the host machine: cc ld.bfd 2.40-14 00:21:45.346 Host machine cpu family: x86_64 00:21:45.346 Host machine cpu: x86_64 00:21:45.346 Message: ## Building in Developer Mode ## 00:21:45.346 Program pkg-config found: YES (/usr/bin/pkg-config) 00:21:45.346 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:21:45.346 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:21:45.346 Program python3 found: YES (/usr/bin/python3) 00:21:45.346 Program cat found: YES (/usr/bin/cat) 00:21:45.346 Compiler for C supports arguments -march=native: YES 00:21:45.346 Checking for size of "void *" : 8 00:21:45.346 Checking for size of "void *" : 8 (cached) 00:21:45.346 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:21:45.346 Library m found: YES 00:21:45.346 Library numa found: YES 00:21:45.346 Has header "numaif.h" : YES 00:21:45.346 Library fdt found: NO 00:21:45.346 Library execinfo found: NO 00:21:45.346 Has header "execinfo.h" : YES 00:21:45.346 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:21:45.346 Run-time dependency libarchive found: NO (tried pkgconfig) 00:21:45.346 Run-time dependency libbsd found: NO (tried pkgconfig) 00:21:45.346 Run-time dependency jansson found: NO (tried pkgconfig) 00:21:45.346 Run-time dependency openssl found: YES 3.1.1 00:21:45.346 Run-time dependency libpcap found: YES 1.10.4 00:21:45.346 Has header "pcap.h" with dependency libpcap: YES 00:21:45.346 Compiler for C supports arguments -Wcast-qual: YES 00:21:45.346 Compiler for C supports arguments -Wdeprecated: YES 00:21:45.346 Compiler for C supports arguments -Wformat: YES 00:21:45.346 Compiler for C supports arguments -Wformat-nonliteral: NO 00:21:45.346 Compiler for C supports arguments -Wformat-security: NO 00:21:45.346 Compiler for C supports arguments -Wmissing-declarations: YES 00:21:45.346 Compiler for C supports arguments -Wmissing-prototypes: YES 00:21:45.346 Compiler for C supports arguments -Wnested-externs: YES 00:21:45.346 Compiler for C supports arguments -Wold-style-definition: YES 00:21:45.346 Compiler for C supports arguments -Wpointer-arith: YES 00:21:45.346 Compiler for C supports arguments -Wsign-compare: YES 00:21:45.346 Compiler for C supports arguments -Wstrict-prototypes: YES 00:21:45.346 Compiler for C supports arguments -Wundef: YES 00:21:45.346 Compiler for C supports arguments -Wwrite-strings: YES 
00:21:45.346 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:21:45.346 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:21:45.346 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:21:45.346 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:21:45.346 Program objdump found: YES (/usr/bin/objdump) 00:21:45.346 Compiler for C supports arguments -mavx512f: YES 00:21:45.346 Checking if "AVX512 checking" compiles: YES 00:21:45.346 Fetching value of define "__SSE4_2__" : 1 00:21:45.346 Fetching value of define "__AES__" : 1 00:21:45.346 Fetching value of define "__AVX__" : 1 00:21:45.346 Fetching value of define "__AVX2__" : 1 00:21:45.346 Fetching value of define "__AVX512BW__" : 1 00:21:45.346 Fetching value of define "__AVX512CD__" : 1 00:21:45.346 Fetching value of define "__AVX512DQ__" : 1 00:21:45.346 Fetching value of define "__AVX512F__" : 1 00:21:45.346 Fetching value of define "__AVX512VL__" : 1 00:21:45.346 Fetching value of define "__PCLMUL__" : 1 00:21:45.346 Fetching value of define "__RDRND__" : 1 00:21:45.346 Fetching value of define "__RDSEED__" : 1 00:21:45.346 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:21:45.346 Fetching value of define "__znver1__" : (undefined) 00:21:45.346 Fetching value of define "__znver2__" : (undefined) 00:21:45.346 Fetching value of define "__znver3__" : (undefined) 00:21:45.346 Fetching value of define "__znver4__" : (undefined) 00:21:45.346 Compiler for C supports arguments -Wno-format-truncation: YES 00:21:45.346 Message: lib/log: Defining dependency "log" 00:21:45.346 Message: lib/kvargs: Defining dependency "kvargs" 00:21:45.346 Message: lib/telemetry: Defining dependency "telemetry" 00:21:45.346 Checking for function "getentropy" : NO 00:21:45.346 Message: lib/eal: Defining dependency "eal" 00:21:45.346 Message: lib/ring: Defining dependency "ring" 00:21:45.346 Message: lib/rcu: Defining dependency "rcu" 00:21:45.346 Message: lib/mempool: Defining dependency "mempool" 00:21:45.346 Message: lib/mbuf: Defining dependency "mbuf" 00:21:45.346 Fetching value of define "__PCLMUL__" : 1 (cached) 00:21:45.346 Fetching value of define "__AVX512F__" : 1 (cached) 00:21:45.346 Fetching value of define "__AVX512BW__" : 1 (cached) 00:21:45.346 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:21:45.346 Fetching value of define "__AVX512VL__" : 1 (cached) 00:21:45.346 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:21:45.346 Compiler for C supports arguments -mpclmul: YES 00:21:45.346 Compiler for C supports arguments -maes: YES 00:21:45.346 Compiler for C supports arguments -mavx512f: YES (cached) 00:21:45.347 Compiler for C supports arguments -mavx512bw: YES 00:21:45.347 Compiler for C supports arguments -mavx512dq: YES 00:21:45.347 Compiler for C supports arguments -mavx512vl: YES 00:21:45.347 Compiler for C supports arguments -mvpclmulqdq: YES 00:21:45.347 Compiler for C supports arguments -mavx2: YES 00:21:45.347 Compiler for C supports arguments -mavx: YES 00:21:45.347 Message: lib/net: Defining dependency "net" 00:21:45.347 Message: lib/meter: Defining dependency "meter" 00:21:45.347 Message: lib/ethdev: Defining dependency "ethdev" 00:21:45.347 Message: lib/pci: Defining dependency "pci" 00:21:45.347 Message: lib/cmdline: Defining dependency "cmdline" 00:21:45.347 Message: lib/hash: Defining dependency "hash" 00:21:45.347 Message: lib/timer: Defining dependency "timer" 00:21:45.347 Message: lib/compressdev: Defining dependency 
"compressdev" 00:21:45.347 Message: lib/cryptodev: Defining dependency "cryptodev" 00:21:45.347 Message: lib/dmadev: Defining dependency "dmadev" 00:21:45.347 Compiler for C supports arguments -Wno-cast-qual: YES 00:21:45.347 Message: lib/power: Defining dependency "power" 00:21:45.347 Message: lib/reorder: Defining dependency "reorder" 00:21:45.347 Message: lib/security: Defining dependency "security" 00:21:45.347 Has header "linux/userfaultfd.h" : YES 00:21:45.347 Has header "linux/vduse.h" : YES 00:21:45.347 Message: lib/vhost: Defining dependency "vhost" 00:21:45.347 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:21:45.347 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:21:45.347 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:21:45.347 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:21:45.347 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:21:45.347 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:21:45.347 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:21:45.347 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:21:45.347 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:21:45.347 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:21:45.347 Program doxygen found: YES (/usr/local/bin/doxygen) 00:21:45.347 Configuring doxy-api-html.conf using configuration 00:21:45.347 Configuring doxy-api-man.conf using configuration 00:21:45.347 Program mandb found: YES (/usr/bin/mandb) 00:21:45.347 Program sphinx-build found: NO 00:21:45.347 Configuring rte_build_config.h using configuration 00:21:45.347 Message: 00:21:45.347 ================= 00:21:45.347 Applications Enabled 00:21:45.347 ================= 00:21:45.347 00:21:45.347 apps: 00:21:45.347 00:21:45.347 00:21:45.347 Message: 00:21:45.347 ================= 00:21:45.347 Libraries Enabled 00:21:45.347 ================= 00:21:45.347 00:21:45.347 libs: 00:21:45.347 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:21:45.347 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:21:45.347 cryptodev, dmadev, power, reorder, security, vhost, 00:21:45.347 00:21:45.347 Message: 00:21:45.347 =============== 00:21:45.347 Drivers Enabled 00:21:45.347 =============== 00:21:45.347 00:21:45.347 common: 00:21:45.347 00:21:45.347 bus: 00:21:45.347 pci, vdev, 00:21:45.347 mempool: 00:21:45.347 ring, 00:21:45.347 dma: 00:21:45.347 00:21:45.347 net: 00:21:45.347 00:21:45.347 crypto: 00:21:45.347 00:21:45.347 compress: 00:21:45.347 00:21:45.347 vdpa: 00:21:45.347 00:21:45.347 00:21:45.347 Message: 00:21:45.347 ================= 00:21:45.347 Content Skipped 00:21:45.347 ================= 00:21:45.347 00:21:45.347 apps: 00:21:45.347 dumpcap: explicitly disabled via build config 00:21:45.347 graph: explicitly disabled via build config 00:21:45.347 pdump: explicitly disabled via build config 00:21:45.347 proc-info: explicitly disabled via build config 00:21:45.347 test-acl: explicitly disabled via build config 00:21:45.347 test-bbdev: explicitly disabled via build config 00:21:45.347 test-cmdline: explicitly disabled via build config 00:21:45.347 test-compress-perf: explicitly disabled via build config 00:21:45.347 test-crypto-perf: explicitly disabled via build config 00:21:45.347 test-dma-perf: explicitly disabled via build config 00:21:45.347 test-eventdev: explicitly disabled via build config 00:21:45.347 test-fib: 
explicitly disabled via build config 00:21:45.347 test-flow-perf: explicitly disabled via build config 00:21:45.347 test-gpudev: explicitly disabled via build config 00:21:45.347 test-mldev: explicitly disabled via build config 00:21:45.347 test-pipeline: explicitly disabled via build config 00:21:45.347 test-pmd: explicitly disabled via build config 00:21:45.347 test-regex: explicitly disabled via build config 00:21:45.347 test-sad: explicitly disabled via build config 00:21:45.347 test-security-perf: explicitly disabled via build config 00:21:45.347 00:21:45.347 libs: 00:21:45.347 argparse: explicitly disabled via build config 00:21:45.347 metrics: explicitly disabled via build config 00:21:45.347 acl: explicitly disabled via build config 00:21:45.347 bbdev: explicitly disabled via build config 00:21:45.347 bitratestats: explicitly disabled via build config 00:21:45.347 bpf: explicitly disabled via build config 00:21:45.347 cfgfile: explicitly disabled via build config 00:21:45.347 distributor: explicitly disabled via build config 00:21:45.347 efd: explicitly disabled via build config 00:21:45.347 eventdev: explicitly disabled via build config 00:21:45.347 dispatcher: explicitly disabled via build config 00:21:45.347 gpudev: explicitly disabled via build config 00:21:45.347 gro: explicitly disabled via build config 00:21:45.347 gso: explicitly disabled via build config 00:21:45.347 ip_frag: explicitly disabled via build config 00:21:45.347 jobstats: explicitly disabled via build config 00:21:45.347 latencystats: explicitly disabled via build config 00:21:45.347 lpm: explicitly disabled via build config 00:21:45.347 member: explicitly disabled via build config 00:21:45.347 pcapng: explicitly disabled via build config 00:21:45.347 rawdev: explicitly disabled via build config 00:21:45.347 regexdev: explicitly disabled via build config 00:21:45.347 mldev: explicitly disabled via build config 00:21:45.347 rib: explicitly disabled via build config 00:21:45.347 sched: explicitly disabled via build config 00:21:45.347 stack: explicitly disabled via build config 00:21:45.347 ipsec: explicitly disabled via build config 00:21:45.347 pdcp: explicitly disabled via build config 00:21:45.347 fib: explicitly disabled via build config 00:21:45.347 port: explicitly disabled via build config 00:21:45.347 pdump: explicitly disabled via build config 00:21:45.347 table: explicitly disabled via build config 00:21:45.347 pipeline: explicitly disabled via build config 00:21:45.347 graph: explicitly disabled via build config 00:21:45.347 node: explicitly disabled via build config 00:21:45.347 00:21:45.347 drivers: 00:21:45.347 common/cpt: not in enabled drivers build config 00:21:45.347 common/dpaax: not in enabled drivers build config 00:21:45.347 common/iavf: not in enabled drivers build config 00:21:45.347 common/idpf: not in enabled drivers build config 00:21:45.347 common/ionic: not in enabled drivers build config 00:21:45.347 common/mvep: not in enabled drivers build config 00:21:45.347 common/octeontx: not in enabled drivers build config 00:21:45.347 bus/auxiliary: not in enabled drivers build config 00:21:45.347 bus/cdx: not in enabled drivers build config 00:21:45.347 bus/dpaa: not in enabled drivers build config 00:21:45.347 bus/fslmc: not in enabled drivers build config 00:21:45.347 bus/ifpga: not in enabled drivers build config 00:21:45.347 bus/platform: not in enabled drivers build config 00:21:45.347 bus/uacce: not in enabled drivers build config 00:21:45.347 bus/vmbus: not in enabled drivers build 
config 00:21:45.347 common/cnxk: not in enabled drivers build config 00:21:45.347 common/mlx5: not in enabled drivers build config 00:21:45.347 common/nfp: not in enabled drivers build config 00:21:45.348 common/nitrox: not in enabled drivers build config 00:21:45.348 common/qat: not in enabled drivers build config 00:21:45.348 common/sfc_efx: not in enabled drivers build config 00:21:45.348 mempool/bucket: not in enabled drivers build config 00:21:45.348 mempool/cnxk: not in enabled drivers build config 00:21:45.348 mempool/dpaa: not in enabled drivers build config 00:21:45.348 mempool/dpaa2: not in enabled drivers build config 00:21:45.348 mempool/octeontx: not in enabled drivers build config 00:21:45.348 mempool/stack: not in enabled drivers build config 00:21:45.348 dma/cnxk: not in enabled drivers build config 00:21:45.348 dma/dpaa: not in enabled drivers build config 00:21:45.348 dma/dpaa2: not in enabled drivers build config 00:21:45.348 dma/hisilicon: not in enabled drivers build config 00:21:45.348 dma/idxd: not in enabled drivers build config 00:21:45.348 dma/ioat: not in enabled drivers build config 00:21:45.348 dma/skeleton: not in enabled drivers build config 00:21:45.348 net/af_packet: not in enabled drivers build config 00:21:45.348 net/af_xdp: not in enabled drivers build config 00:21:45.348 net/ark: not in enabled drivers build config 00:21:45.348 net/atlantic: not in enabled drivers build config 00:21:45.348 net/avp: not in enabled drivers build config 00:21:45.348 net/axgbe: not in enabled drivers build config 00:21:45.348 net/bnx2x: not in enabled drivers build config 00:21:45.348 net/bnxt: not in enabled drivers build config 00:21:45.348 net/bonding: not in enabled drivers build config 00:21:45.348 net/cnxk: not in enabled drivers build config 00:21:45.348 net/cpfl: not in enabled drivers build config 00:21:45.348 net/cxgbe: not in enabled drivers build config 00:21:45.348 net/dpaa: not in enabled drivers build config 00:21:45.348 net/dpaa2: not in enabled drivers build config 00:21:45.348 net/e1000: not in enabled drivers build config 00:21:45.348 net/ena: not in enabled drivers build config 00:21:45.348 net/enetc: not in enabled drivers build config 00:21:45.348 net/enetfec: not in enabled drivers build config 00:21:45.348 net/enic: not in enabled drivers build config 00:21:45.348 net/failsafe: not in enabled drivers build config 00:21:45.348 net/fm10k: not in enabled drivers build config 00:21:45.348 net/gve: not in enabled drivers build config 00:21:45.348 net/hinic: not in enabled drivers build config 00:21:45.348 net/hns3: not in enabled drivers build config 00:21:45.348 net/i40e: not in enabled drivers build config 00:21:45.348 net/iavf: not in enabled drivers build config 00:21:45.348 net/ice: not in enabled drivers build config 00:21:45.348 net/idpf: not in enabled drivers build config 00:21:45.348 net/igc: not in enabled drivers build config 00:21:45.348 net/ionic: not in enabled drivers build config 00:21:45.348 net/ipn3ke: not in enabled drivers build config 00:21:45.348 net/ixgbe: not in enabled drivers build config 00:21:45.348 net/mana: not in enabled drivers build config 00:21:45.348 net/memif: not in enabled drivers build config 00:21:45.348 net/mlx4: not in enabled drivers build config 00:21:45.348 net/mlx5: not in enabled drivers build config 00:21:45.348 net/mvneta: not in enabled drivers build config 00:21:45.348 net/mvpp2: not in enabled drivers build config 00:21:45.348 net/netvsc: not in enabled drivers build config 00:21:45.348 net/nfb: not in 
enabled drivers build config 00:21:45.348 net/nfp: not in enabled drivers build config 00:21:45.348 net/ngbe: not in enabled drivers build config 00:21:45.348 net/null: not in enabled drivers build config 00:21:45.348 net/octeontx: not in enabled drivers build config 00:21:45.348 net/octeon_ep: not in enabled drivers build config 00:21:45.348 net/pcap: not in enabled drivers build config 00:21:45.348 net/pfe: not in enabled drivers build config 00:21:45.348 net/qede: not in enabled drivers build config 00:21:45.348 net/ring: not in enabled drivers build config 00:21:45.348 net/sfc: not in enabled drivers build config 00:21:45.348 net/softnic: not in enabled drivers build config 00:21:45.348 net/tap: not in enabled drivers build config 00:21:45.348 net/thunderx: not in enabled drivers build config 00:21:45.348 net/txgbe: not in enabled drivers build config 00:21:45.348 net/vdev_netvsc: not in enabled drivers build config 00:21:45.348 net/vhost: not in enabled drivers build config 00:21:45.348 net/virtio: not in enabled drivers build config 00:21:45.348 net/vmxnet3: not in enabled drivers build config 00:21:45.348 raw/*: missing internal dependency, "rawdev" 00:21:45.348 crypto/armv8: not in enabled drivers build config 00:21:45.348 crypto/bcmfs: not in enabled drivers build config 00:21:45.348 crypto/caam_jr: not in enabled drivers build config 00:21:45.348 crypto/ccp: not in enabled drivers build config 00:21:45.348 crypto/cnxk: not in enabled drivers build config 00:21:45.348 crypto/dpaa_sec: not in enabled drivers build config 00:21:45.348 crypto/dpaa2_sec: not in enabled drivers build config 00:21:45.348 crypto/ipsec_mb: not in enabled drivers build config 00:21:45.348 crypto/mlx5: not in enabled drivers build config 00:21:45.348 crypto/mvsam: not in enabled drivers build config 00:21:45.348 crypto/nitrox: not in enabled drivers build config 00:21:45.348 crypto/null: not in enabled drivers build config 00:21:45.348 crypto/octeontx: not in enabled drivers build config 00:21:45.348 crypto/openssl: not in enabled drivers build config 00:21:45.348 crypto/scheduler: not in enabled drivers build config 00:21:45.348 crypto/uadk: not in enabled drivers build config 00:21:45.348 crypto/virtio: not in enabled drivers build config 00:21:45.348 compress/isal: not in enabled drivers build config 00:21:45.348 compress/mlx5: not in enabled drivers build config 00:21:45.348 compress/nitrox: not in enabled drivers build config 00:21:45.348 compress/octeontx: not in enabled drivers build config 00:21:45.348 compress/zlib: not in enabled drivers build config 00:21:45.348 regex/*: missing internal dependency, "regexdev" 00:21:45.348 ml/*: missing internal dependency, "mldev" 00:21:45.348 vdpa/ifc: not in enabled drivers build config 00:21:45.348 vdpa/mlx5: not in enabled drivers build config 00:21:45.348 vdpa/nfp: not in enabled drivers build config 00:21:45.348 vdpa/sfc: not in enabled drivers build config 00:21:45.348 event/*: missing internal dependency, "eventdev" 00:21:45.348 baseband/*: missing internal dependency, "bbdev" 00:21:45.348 gpu/*: missing internal dependency, "gpudev" 00:21:45.348 00:21:45.348 00:21:45.348 Build targets in project: 85 00:21:45.348 00:21:45.348 DPDK 24.03.0 00:21:45.348 00:21:45.348 User defined options 00:21:45.348 buildtype : debug 00:21:45.348 default_library : shared 00:21:45.348 libdir : lib 00:21:45.348 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:21:45.348 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:21:45.348 c_link_args : 00:21:45.348 cpu_instruction_set: native 00:21:45.348 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:21:45.348 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:21:45.348 enable_docs : false 00:21:45.348 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:21:45.348 enable_kmods : false 00:21:45.348 max_lcores : 128 00:21:45.348 tests : false 00:21:45.348 00:21:45.348 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:21:45.607 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:21:45.607 [1/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:21:45.607 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:21:45.607 [3/268] Linking static target lib/librte_log.a 00:21:45.607 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:21:45.607 [5/268] Linking static target lib/librte_kvargs.a 00:21:45.865 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:21:46.123 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:21:46.381 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:21:46.381 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:21:46.381 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:21:46.381 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:21:46.381 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:21:46.381 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:21:46.381 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:21:46.381 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:21:46.381 [16/268] Linking static target lib/librte_telemetry.a 00:21:46.641 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:21:46.641 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:21:46.901 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:21:46.901 [20/268] Linking target lib/librte_log.so.24.1 00:21:47.160 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:21:47.160 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:21:47.160 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:21:47.160 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:21:47.160 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:21:47.160 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:21:47.160 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:21:47.160 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:21:47.418 [29/268] Linking target lib/librte_kvargs.so.24.1 00:21:47.418 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:21:47.418 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:21:47.418 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:21:47.676 [33/268] Linking target lib/librte_telemetry.so.24.1 00:21:47.676 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:21:47.676 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:21:47.676 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:21:47.935 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:21:47.935 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:21:47.935 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:21:47.935 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:21:47.935 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:21:47.935 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:21:47.935 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:21:48.194 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:21:48.194 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:21:48.194 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:21:48.194 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:21:48.452 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:21:48.452 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:21:48.709 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:21:48.709 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:21:48.709 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:21:48.968 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:21:48.968 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:21:48.968 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:21:48.968 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:21:48.968 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:21:48.968 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:21:48.968 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:21:49.226 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:21:49.486 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:21:49.486 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:21:49.486 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:21:49.486 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:21:49.744 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:21:49.744 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:21:49.744 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:21:49.744 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:21:50.002 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:21:50.258 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:21:50.258 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:21:50.258 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:21:50.258 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:21:50.258 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:21:50.258 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:21:50.516 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:21:50.516 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:21:50.516 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:21:50.773 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:21:51.031 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:21:51.031 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:21:51.031 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:21:51.031 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:21:51.289 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:21:51.289 [85/268] Linking static target lib/librte_eal.a 00:21:51.289 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:21:51.289 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:21:51.289 [88/268] Linking static target lib/librte_ring.a 00:21:51.289 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:21:51.289 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:21:51.547 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:21:51.547 [92/268] Linking static target lib/librte_mempool.a 00:21:51.805 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:21:51.806 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:21:51.806 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:21:51.806 [96/268] Linking static target lib/librte_rcu.a 00:21:52.063 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:21:52.063 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:21:52.063 [99/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:21:52.063 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:21:52.321 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:21:52.578 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:21:52.578 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:21:52.578 [104/268] Linking static target lib/librte_mbuf.a 00:21:52.578 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:21:52.578 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:21:52.836 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:21:52.836 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:21:52.836 [109/268] Linking static target lib/librte_net.a 00:21:53.094 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:21:53.094 [111/268] Linking static target lib/librte_meter.a 00:21:53.352 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:21:53.352 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:21:53.352 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:21:53.610 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:21:53.610 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:21:53.610 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:21:53.869 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:21:53.869 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:21:54.127 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:21:54.385 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:21:54.385 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:21:54.644 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:21:54.902 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:21:54.902 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:21:54.902 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:21:54.902 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:21:54.902 [128/268] Linking static target lib/librte_pci.a 00:21:54.902 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:21:54.902 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:21:54.902 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:21:55.160 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:21:55.160 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:21:55.160 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:21:55.160 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:21:55.160 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:21:55.160 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:21:55.418 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:21:55.418 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:21:55.418 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:55.418 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:21:55.418 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:21:55.418 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:21:55.418 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:21:55.418 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:21:55.418 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:21:55.418 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:21:55.418 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:21:55.675 [149/268] Linking static target lib/librte_cmdline.a 00:21:55.675 [150/268] Linking static target lib/librte_ethdev.a 00:21:55.933 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:21:56.191 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:21:56.191 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:21:56.191 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:21:56.191 [155/268] Linking static target lib/librte_hash.a 00:21:56.191 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:21:56.448 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:21:56.448 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:21:56.448 [159/268] Linking static target lib/librte_timer.a 00:21:56.706 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:21:56.706 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:21:56.706 [162/268] Linking static target lib/librte_compressdev.a 00:21:56.964 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:21:57.222 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:21:57.222 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:21:57.222 [166/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:21:57.222 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:21:57.222 [168/268] Linking static target lib/librte_dmadev.a 00:21:57.480 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:21:57.737 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:21:57.737 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:21:57.737 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:21:57.737 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:21:57.995 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:21:57.995 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:58.253 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:21:58.253 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:21:58.510 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:21:58.510 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:21:58.510 [180/268] Linking static target lib/librte_cryptodev.a 00:21:58.510 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:21:58.510 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:58.843 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:21:58.843 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:21:59.100 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:21:59.100 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:21:59.100 [187/268] Linking static target lib/librte_power.a 00:21:59.358 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:21:59.358 [189/268] Linking static target lib/librte_reorder.a 00:21:59.358 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:21:59.358 [191/268] Linking static 
target lib/librte_security.a 00:21:59.617 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:21:59.876 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:22:00.134 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:22:00.392 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:22:00.649 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:22:00.649 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:22:00.649 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:22:00.907 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:22:00.907 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:22:01.164 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:22:01.164 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:22:01.731 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:22:01.731 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:22:01.731 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:22:01.731 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:22:01.731 [207/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:22:01.989 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:22:01.989 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:22:01.989 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:22:01.989 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:01.989 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:22:02.288 [213/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:22:02.288 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:22:02.288 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:22:02.288 [216/268] Linking static target drivers/librte_bus_vdev.a 00:22:02.585 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:22:02.585 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:22:02.585 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:22:02.585 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:02.585 [221/268] Linking static target drivers/librte_bus_pci.a 00:22:02.585 [222/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:22:02.585 [223/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:22:02.843 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:22:02.844 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:22:02.844 [226/268] Linking static target drivers/librte_mempool_ring.a 00:22:02.844 [227/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:22:03.102 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:22:03.669 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:22:03.669 [230/268] Linking static target lib/librte_vhost.a 00:22:05.570 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:22:05.570 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:22:05.570 [233/268] Linking target lib/librte_eal.so.24.1 00:22:05.830 [234/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:22:05.830 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:22:05.830 [236/268] Linking target lib/librte_pci.so.24.1 00:22:05.830 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:22:05.830 [238/268] Linking target lib/librte_ring.so.24.1 00:22:05.830 [239/268] Linking target lib/librte_dmadev.so.24.1 00:22:05.830 [240/268] Linking target lib/librte_timer.so.24.1 00:22:05.830 [241/268] Linking target lib/librte_meter.so.24.1 00:22:06.089 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:22:06.089 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:22:06.089 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:22:06.089 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:22:06.089 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:22:06.089 [247/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:22:06.089 [248/268] Linking target lib/librte_mempool.so.24.1 00:22:06.089 [249/268] Linking target lib/librte_rcu.so.24.1 00:22:06.346 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:22:06.346 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:22:06.346 [252/268] Linking target lib/librte_mbuf.so.24.1 00:22:06.346 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:22:06.605 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:22:06.605 [255/268] Linking target lib/librte_net.so.24.1 00:22:06.605 [256/268] Linking target lib/librte_compressdev.so.24.1 00:22:06.605 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:22:06.605 [258/268] Linking target lib/librte_reorder.so.24.1 00:22:06.605 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:22:06.864 [260/268] Linking target lib/librte_hash.so.24.1 00:22:06.864 [261/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:22:06.864 [262/268] Linking target lib/librte_cmdline.so.24.1 00:22:06.864 [263/268] Linking target lib/librte_ethdev.so.24.1 00:22:06.864 [264/268] Linking target lib/librte_security.so.24.1 00:22:06.864 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:22:07.122 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:22:07.123 [267/268] Linking target lib/librte_power.so.24.1 00:22:07.123 [268/268] Linking target lib/librte_vhost.so.24.1 00:22:07.123 INFO: autodetecting backend as ninja 00:22:07.123 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:22:33.785 CC lib/ut_mock/mock.o 00:22:33.785 CC lib/log/log_flags.o 00:22:33.785 CC lib/log/log.o 00:22:33.785 CC 
lib/log/log_deprecated.o 00:22:33.785 CC lib/ut/ut.o 00:22:33.785 LIB libspdk_ut_mock.a 00:22:33.785 LIB libspdk_log.a 00:22:33.785 LIB libspdk_ut.a 00:22:33.785 SO libspdk_ut_mock.so.6.0 00:22:33.785 SO libspdk_ut.so.2.0 00:22:33.785 SO libspdk_log.so.7.1 00:22:33.785 SYMLINK libspdk_ut_mock.so 00:22:33.785 SYMLINK libspdk_ut.so 00:22:34.043 SYMLINK libspdk_log.so 00:22:34.300 CXX lib/trace_parser/trace.o 00:22:34.300 CC lib/dma/dma.o 00:22:34.300 CC lib/ioat/ioat.o 00:22:34.300 CC lib/util/base64.o 00:22:34.300 CC lib/util/bit_array.o 00:22:34.300 CC lib/util/cpuset.o 00:22:34.300 CC lib/util/crc16.o 00:22:34.300 CC lib/util/crc32.o 00:22:34.300 CC lib/util/crc32c.o 00:22:34.300 CC lib/vfio_user/host/vfio_user_pci.o 00:22:34.300 CC lib/vfio_user/host/vfio_user.o 00:22:34.300 CC lib/util/crc32_ieee.o 00:22:34.300 CC lib/util/crc64.o 00:22:34.557 CC lib/util/dif.o 00:22:34.557 CC lib/util/fd.o 00:22:34.557 LIB libspdk_dma.a 00:22:34.557 CC lib/util/fd_group.o 00:22:34.557 SO libspdk_dma.so.5.0 00:22:34.557 CC lib/util/file.o 00:22:34.557 SYMLINK libspdk_dma.so 00:22:34.557 CC lib/util/hexlify.o 00:22:34.557 CC lib/util/iov.o 00:22:34.557 LIB libspdk_ioat.a 00:22:34.557 CC lib/util/math.o 00:22:34.557 SO libspdk_ioat.so.7.0 00:22:34.557 LIB libspdk_vfio_user.a 00:22:34.815 CC lib/util/net.o 00:22:34.815 SO libspdk_vfio_user.so.5.0 00:22:34.815 SYMLINK libspdk_ioat.so 00:22:34.815 CC lib/util/pipe.o 00:22:34.815 CC lib/util/string.o 00:22:34.815 CC lib/util/strerror_tls.o 00:22:34.815 SYMLINK libspdk_vfio_user.so 00:22:34.815 CC lib/util/uuid.o 00:22:34.815 CC lib/util/xor.o 00:22:34.815 CC lib/util/zipf.o 00:22:34.815 CC lib/util/md5.o 00:22:35.074 LIB libspdk_util.a 00:22:35.333 SO libspdk_util.so.10.1 00:22:35.333 LIB libspdk_trace_parser.a 00:22:35.333 SO libspdk_trace_parser.so.6.0 00:22:35.592 SYMLINK libspdk_util.so 00:22:35.592 SYMLINK libspdk_trace_parser.so 00:22:35.850 CC lib/idxd/idxd_user.o 00:22:35.850 CC lib/idxd/idxd_kernel.o 00:22:35.850 CC lib/conf/conf.o 00:22:35.850 CC lib/idxd/idxd.o 00:22:35.850 CC lib/env_dpdk/env.o 00:22:35.850 CC lib/env_dpdk/pci.o 00:22:35.850 CC lib/env_dpdk/memory.o 00:22:35.850 CC lib/rdma_utils/rdma_utils.o 00:22:35.850 CC lib/json/json_parse.o 00:22:35.850 CC lib/vmd/vmd.o 00:22:35.850 CC lib/json/json_util.o 00:22:36.109 CC lib/json/json_write.o 00:22:36.109 CC lib/env_dpdk/init.o 00:22:36.109 CC lib/env_dpdk/threads.o 00:22:36.109 LIB libspdk_conf.a 00:22:36.367 CC lib/env_dpdk/pci_ioat.o 00:22:36.367 SO libspdk_conf.so.6.0 00:22:36.367 LIB libspdk_rdma_utils.a 00:22:36.367 CC lib/env_dpdk/pci_virtio.o 00:22:36.367 SO libspdk_rdma_utils.so.1.0 00:22:36.367 LIB libspdk_json.a 00:22:36.367 LIB libspdk_idxd.a 00:22:36.367 SYMLINK libspdk_conf.so 00:22:36.367 SO libspdk_json.so.6.0 00:22:36.367 CC lib/env_dpdk/pci_vmd.o 00:22:36.367 SYMLINK libspdk_rdma_utils.so 00:22:36.367 CC lib/env_dpdk/pci_idxd.o 00:22:36.367 SO libspdk_idxd.so.12.1 00:22:36.367 CC lib/env_dpdk/pci_event.o 00:22:36.367 CC lib/vmd/led.o 00:22:36.367 SYMLINK libspdk_json.so 00:22:36.367 CC lib/env_dpdk/sigbus_handler.o 00:22:36.626 SYMLINK libspdk_idxd.so 00:22:36.626 CC lib/env_dpdk/pci_dpdk.o 00:22:36.626 CC lib/env_dpdk/pci_dpdk_2207.o 00:22:36.626 CC lib/env_dpdk/pci_dpdk_2211.o 00:22:36.626 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:22:36.626 CC lib/jsonrpc/jsonrpc_server.o 00:22:36.626 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:22:36.626 CC lib/jsonrpc/jsonrpc_client.o 00:22:36.626 CC lib/rdma_provider/common.o 00:22:36.626 LIB libspdk_vmd.a 00:22:36.626 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:22:36.626 SO libspdk_vmd.so.6.0 00:22:36.885 SYMLINK libspdk_vmd.so 00:22:36.885 LIB libspdk_jsonrpc.a 00:22:36.885 LIB libspdk_rdma_provider.a 00:22:36.885 SO libspdk_jsonrpc.so.6.0 00:22:36.885 SO libspdk_rdma_provider.so.7.0 00:22:37.144 SYMLINK libspdk_rdma_provider.so 00:22:37.144 SYMLINK libspdk_jsonrpc.so 00:22:37.403 LIB libspdk_env_dpdk.a 00:22:37.403 CC lib/rpc/rpc.o 00:22:37.403 SO libspdk_env_dpdk.so.15.1 00:22:37.662 LIB libspdk_rpc.a 00:22:37.662 SYMLINK libspdk_env_dpdk.so 00:22:37.662 SO libspdk_rpc.so.6.0 00:22:37.662 SYMLINK libspdk_rpc.so 00:22:37.921 CC lib/trace/trace.o 00:22:37.921 CC lib/trace/trace_flags.o 00:22:37.921 CC lib/trace/trace_rpc.o 00:22:37.921 CC lib/notify/notify_rpc.o 00:22:37.921 CC lib/notify/notify.o 00:22:37.921 CC lib/keyring/keyring.o 00:22:37.921 CC lib/keyring/keyring_rpc.o 00:22:38.180 LIB libspdk_notify.a 00:22:38.180 SO libspdk_notify.so.6.0 00:22:38.180 SYMLINK libspdk_notify.so 00:22:38.180 LIB libspdk_keyring.a 00:22:38.180 LIB libspdk_trace.a 00:22:38.180 SO libspdk_keyring.so.2.0 00:22:38.180 SO libspdk_trace.so.11.0 00:22:38.438 SYMLINK libspdk_keyring.so 00:22:38.438 SYMLINK libspdk_trace.so 00:22:38.697 CC lib/thread/thread.o 00:22:38.697 CC lib/thread/iobuf.o 00:22:38.697 CC lib/sock/sock_rpc.o 00:22:38.697 CC lib/sock/sock.o 00:22:39.264 LIB libspdk_sock.a 00:22:39.265 SO libspdk_sock.so.10.0 00:22:39.265 SYMLINK libspdk_sock.so 00:22:39.522 CC lib/nvme/nvme_ctrlr_cmd.o 00:22:39.522 CC lib/nvme/nvme_fabric.o 00:22:39.522 CC lib/nvme/nvme_ctrlr.o 00:22:39.522 CC lib/nvme/nvme_ns.o 00:22:39.522 CC lib/nvme/nvme_pcie_common.o 00:22:39.522 CC lib/nvme/nvme_pcie.o 00:22:39.522 CC lib/nvme/nvme_ns_cmd.o 00:22:39.522 CC lib/nvme/nvme.o 00:22:39.522 CC lib/nvme/nvme_qpair.o 00:22:40.112 LIB libspdk_thread.a 00:22:40.370 SO libspdk_thread.so.11.0 00:22:40.370 SYMLINK libspdk_thread.so 00:22:40.370 CC lib/nvme/nvme_quirks.o 00:22:40.370 CC lib/nvme/nvme_transport.o 00:22:40.370 CC lib/nvme/nvme_discovery.o 00:22:40.629 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:22:40.629 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:22:40.629 CC lib/nvme/nvme_tcp.o 00:22:40.629 CC lib/nvme/nvme_opal.o 00:22:40.887 CC lib/nvme/nvme_io_msg.o 00:22:40.887 CC lib/nvme/nvme_poll_group.o 00:22:41.146 CC lib/accel/accel.o 00:22:41.146 CC lib/nvme/nvme_zns.o 00:22:41.146 CC lib/nvme/nvme_stubs.o 00:22:41.405 CC lib/nvme/nvme_auth.o 00:22:41.405 CC lib/nvme/nvme_cuse.o 00:22:41.406 CC lib/blob/blobstore.o 00:22:41.664 CC lib/blob/request.o 00:22:41.665 CC lib/nvme/nvme_rdma.o 00:22:41.923 CC lib/blob/zeroes.o 00:22:41.923 CC lib/blob/blob_bs_dev.o 00:22:42.181 CC lib/init/json_config.o 00:22:42.181 CC lib/accel/accel_rpc.o 00:22:42.181 CC lib/virtio/virtio.o 00:22:42.438 CC lib/virtio/virtio_vhost_user.o 00:22:42.438 CC lib/virtio/virtio_vfio_user.o 00:22:42.438 CC lib/init/subsystem.o 00:22:42.438 CC lib/fsdev/fsdev.o 00:22:42.438 CC lib/init/subsystem_rpc.o 00:22:42.438 CC lib/fsdev/fsdev_io.o 00:22:42.438 CC lib/accel/accel_sw.o 00:22:42.696 CC lib/fsdev/fsdev_rpc.o 00:22:42.696 CC lib/init/rpc.o 00:22:42.696 CC lib/virtio/virtio_pci.o 00:22:42.954 LIB libspdk_init.a 00:22:42.955 LIB libspdk_accel.a 00:22:42.955 SO libspdk_init.so.6.0 00:22:42.955 SO libspdk_accel.so.16.0 00:22:42.955 SYMLINK libspdk_init.so 00:22:42.955 SYMLINK libspdk_accel.so 00:22:43.238 LIB libspdk_virtio.a 00:22:43.238 LIB libspdk_fsdev.a 00:22:43.238 LIB libspdk_nvme.a 00:22:43.238 SO libspdk_virtio.so.7.0 00:22:43.238 SO libspdk_fsdev.so.2.0 00:22:43.238 CC 
lib/event/app.o 00:22:43.238 CC lib/event/reactor.o 00:22:43.238 CC lib/event/log_rpc.o 00:22:43.238 CC lib/event/app_rpc.o 00:22:43.238 CC lib/event/scheduler_static.o 00:22:43.238 SYMLINK libspdk_fsdev.so 00:22:43.238 SYMLINK libspdk_virtio.so 00:22:43.238 CC lib/bdev/bdev.o 00:22:43.238 CC lib/bdev/bdev_rpc.o 00:22:43.497 SO libspdk_nvme.so.15.0 00:22:43.497 CC lib/bdev/bdev_zone.o 00:22:43.497 CC lib/bdev/part.o 00:22:43.497 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:22:43.497 CC lib/bdev/scsi_nvme.o 00:22:43.756 SYMLINK libspdk_nvme.so 00:22:43.756 LIB libspdk_event.a 00:22:43.756 SO libspdk_event.so.14.0 00:22:44.014 SYMLINK libspdk_event.so 00:22:44.273 LIB libspdk_fuse_dispatcher.a 00:22:44.273 SO libspdk_fuse_dispatcher.so.1.0 00:22:44.273 SYMLINK libspdk_fuse_dispatcher.so 00:22:45.209 LIB libspdk_blob.a 00:22:45.209 SO libspdk_blob.so.11.0 00:22:45.209 SYMLINK libspdk_blob.so 00:22:45.467 CC lib/lvol/lvol.o 00:22:45.467 CC lib/blobfs/tree.o 00:22:45.467 CC lib/blobfs/blobfs.o 00:22:46.033 LIB libspdk_bdev.a 00:22:46.033 SO libspdk_bdev.so.17.0 00:22:46.290 SYMLINK libspdk_bdev.so 00:22:46.290 LIB libspdk_blobfs.a 00:22:46.291 SO libspdk_blobfs.so.10.0 00:22:46.291 CC lib/ftl/ftl_core.o 00:22:46.291 CC lib/ftl/ftl_layout.o 00:22:46.291 CC lib/ftl/ftl_init.o 00:22:46.291 CC lib/ftl/ftl_debug.o 00:22:46.291 CC lib/nvmf/ctrlr.o 00:22:46.291 CC lib/scsi/dev.o 00:22:46.291 CC lib/nbd/nbd.o 00:22:46.548 CC lib/ublk/ublk.o 00:22:46.548 SYMLINK libspdk_blobfs.so 00:22:46.548 CC lib/ublk/ublk_rpc.o 00:22:46.548 LIB libspdk_lvol.a 00:22:46.548 SO libspdk_lvol.so.10.0 00:22:46.548 SYMLINK libspdk_lvol.so 00:22:46.548 CC lib/ftl/ftl_io.o 00:22:46.548 CC lib/nvmf/ctrlr_discovery.o 00:22:46.806 CC lib/nvmf/ctrlr_bdev.o 00:22:46.806 CC lib/scsi/lun.o 00:22:46.806 CC lib/scsi/port.o 00:22:46.806 CC lib/nvmf/subsystem.o 00:22:46.806 CC lib/nvmf/nvmf.o 00:22:46.806 CC lib/nbd/nbd_rpc.o 00:22:46.806 CC lib/nvmf/nvmf_rpc.o 00:22:46.806 CC lib/ftl/ftl_sb.o 00:22:47.064 CC lib/scsi/scsi.o 00:22:47.064 LIB libspdk_nbd.a 00:22:47.064 SO libspdk_nbd.so.7.0 00:22:47.064 LIB libspdk_ublk.a 00:22:47.064 CC lib/ftl/ftl_l2p.o 00:22:47.064 SO libspdk_ublk.so.3.0 00:22:47.064 SYMLINK libspdk_nbd.so 00:22:47.064 CC lib/scsi/scsi_bdev.o 00:22:47.064 CC lib/scsi/scsi_pr.o 00:22:47.322 CC lib/scsi/scsi_rpc.o 00:22:47.322 SYMLINK libspdk_ublk.so 00:22:47.322 CC lib/scsi/task.o 00:22:47.322 CC lib/nvmf/transport.o 00:22:47.322 CC lib/nvmf/tcp.o 00:22:47.322 CC lib/ftl/ftl_l2p_flat.o 00:22:47.581 CC lib/nvmf/stubs.o 00:22:47.581 CC lib/nvmf/mdns_server.o 00:22:47.840 CC lib/ftl/ftl_nv_cache.o 00:22:47.840 LIB libspdk_scsi.a 00:22:47.840 SO libspdk_scsi.so.9.0 00:22:47.840 CC lib/nvmf/rdma.o 00:22:47.840 CC lib/nvmf/auth.o 00:22:47.840 SYMLINK libspdk_scsi.so 00:22:47.840 CC lib/ftl/ftl_band.o 00:22:48.097 CC lib/ftl/ftl_band_ops.o 00:22:48.097 CC lib/ftl/ftl_writer.o 00:22:48.356 CC lib/ftl/ftl_rq.o 00:22:48.356 CC lib/ftl/ftl_reloc.o 00:22:48.615 CC lib/ftl/ftl_l2p_cache.o 00:22:48.615 CC lib/ftl/ftl_p2l.o 00:22:48.615 CC lib/iscsi/conn.o 00:22:48.615 CC lib/ftl/ftl_p2l_log.o 00:22:48.615 CC lib/vhost/vhost.o 00:22:48.873 CC lib/vhost/vhost_rpc.o 00:22:48.873 CC lib/ftl/mngt/ftl_mngt.o 00:22:48.873 CC lib/iscsi/init_grp.o 00:22:48.873 CC lib/iscsi/iscsi.o 00:22:49.131 CC lib/iscsi/param.o 00:22:49.131 CC lib/iscsi/portal_grp.o 00:22:49.131 CC lib/iscsi/tgt_node.o 00:22:49.391 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:22:49.391 CC lib/vhost/vhost_scsi.o 00:22:49.391 CC lib/iscsi/iscsi_subsystem.o 00:22:49.391 CC 
lib/iscsi/iscsi_rpc.o 00:22:49.391 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:22:49.391 CC lib/iscsi/task.o 00:22:49.650 CC lib/vhost/vhost_blk.o 00:22:49.650 CC lib/ftl/mngt/ftl_mngt_startup.o 00:22:49.650 CC lib/ftl/mngt/ftl_mngt_md.o 00:22:49.908 CC lib/ftl/mngt/ftl_mngt_misc.o 00:22:49.908 CC lib/vhost/rte_vhost_user.o 00:22:49.908 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:22:49.908 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:22:49.908 CC lib/ftl/mngt/ftl_mngt_band.o 00:22:50.168 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:22:50.168 LIB libspdk_nvmf.a 00:22:50.168 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:22:50.168 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:22:50.168 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:22:50.168 SO libspdk_nvmf.so.20.0 00:22:50.168 CC lib/ftl/utils/ftl_conf.o 00:22:50.447 CC lib/ftl/utils/ftl_md.o 00:22:50.447 CC lib/ftl/utils/ftl_mempool.o 00:22:50.447 CC lib/ftl/utils/ftl_bitmap.o 00:22:50.447 SYMLINK libspdk_nvmf.so 00:22:50.447 CC lib/ftl/utils/ftl_property.o 00:22:50.447 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:22:50.447 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:22:50.447 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:22:50.706 LIB libspdk_iscsi.a 00:22:50.706 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:22:50.706 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:22:50.706 SO libspdk_iscsi.so.8.0 00:22:50.706 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:22:50.706 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:22:50.706 CC lib/ftl/upgrade/ftl_sb_v3.o 00:22:50.706 CC lib/ftl/upgrade/ftl_sb_v5.o 00:22:50.706 CC lib/ftl/nvc/ftl_nvc_dev.o 00:22:50.706 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:22:50.706 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:22:50.965 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:22:50.965 SYMLINK libspdk_iscsi.so 00:22:50.965 CC lib/ftl/base/ftl_base_dev.o 00:22:50.965 CC lib/ftl/base/ftl_base_bdev.o 00:22:50.965 CC lib/ftl/ftl_trace.o 00:22:51.223 LIB libspdk_vhost.a 00:22:51.223 SO libspdk_vhost.so.8.0 00:22:51.223 LIB libspdk_ftl.a 00:22:51.481 SYMLINK libspdk_vhost.so 00:22:51.481 SO libspdk_ftl.so.9.0 00:22:52.048 SYMLINK libspdk_ftl.so 00:22:52.306 CC module/env_dpdk/env_dpdk_rpc.o 00:22:52.306 CC module/fsdev/aio/fsdev_aio.o 00:22:52.306 CC module/accel/error/accel_error.o 00:22:52.306 CC module/sock/posix/posix.o 00:22:52.306 CC module/accel/ioat/accel_ioat.o 00:22:52.306 CC module/keyring/file/keyring.o 00:22:52.306 CC module/accel/iaa/accel_iaa.o 00:22:52.306 CC module/accel/dsa/accel_dsa.o 00:22:52.306 CC module/blob/bdev/blob_bdev.o 00:22:52.565 CC module/scheduler/dynamic/scheduler_dynamic.o 00:22:52.565 LIB libspdk_env_dpdk_rpc.a 00:22:52.565 SO libspdk_env_dpdk_rpc.so.6.0 00:22:52.565 SYMLINK libspdk_env_dpdk_rpc.so 00:22:52.565 CC module/fsdev/aio/fsdev_aio_rpc.o 00:22:52.565 CC module/keyring/file/keyring_rpc.o 00:22:52.565 CC module/accel/ioat/accel_ioat_rpc.o 00:22:52.565 CC module/accel/error/accel_error_rpc.o 00:22:52.823 LIB libspdk_scheduler_dynamic.a 00:22:52.823 LIB libspdk_blob_bdev.a 00:22:52.823 SO libspdk_scheduler_dynamic.so.4.0 00:22:52.823 CC module/accel/iaa/accel_iaa_rpc.o 00:22:52.823 SO libspdk_blob_bdev.so.11.0 00:22:52.823 LIB libspdk_keyring_file.a 00:22:52.823 CC module/fsdev/aio/linux_aio_mgr.o 00:22:52.823 LIB libspdk_accel_ioat.a 00:22:52.823 SO libspdk_keyring_file.so.2.0 00:22:52.823 SO libspdk_accel_ioat.so.6.0 00:22:52.823 SYMLINK libspdk_scheduler_dynamic.so 00:22:52.823 SYMLINK libspdk_blob_bdev.so 00:22:52.823 LIB libspdk_accel_error.a 00:22:52.823 CC module/accel/dsa/accel_dsa_rpc.o 00:22:52.823 SYMLINK libspdk_keyring_file.so 00:22:52.823 SO libspdk_accel_error.so.2.0 
00:22:52.823 SYMLINK libspdk_accel_ioat.so 00:22:52.823 LIB libspdk_accel_iaa.a 00:22:53.081 SO libspdk_accel_iaa.so.3.0 00:22:53.081 SYMLINK libspdk_accel_error.so 00:22:53.081 SYMLINK libspdk_accel_iaa.so 00:22:53.081 LIB libspdk_accel_dsa.a 00:22:53.081 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:22:53.081 SO libspdk_accel_dsa.so.5.0 00:22:53.081 CC module/keyring/linux/keyring.o 00:22:53.081 CC module/bdev/delay/vbdev_delay.o 00:22:53.081 CC module/bdev/error/vbdev_error.o 00:22:53.341 LIB libspdk_fsdev_aio.a 00:22:53.341 SYMLINK libspdk_accel_dsa.so 00:22:53.341 CC module/bdev/error/vbdev_error_rpc.o 00:22:53.341 SO libspdk_fsdev_aio.so.1.0 00:22:53.341 CC module/blobfs/bdev/blobfs_bdev.o 00:22:53.341 CC module/scheduler/gscheduler/gscheduler.o 00:22:53.341 LIB libspdk_scheduler_dpdk_governor.a 00:22:53.341 LIB libspdk_sock_posix.a 00:22:53.341 CC module/bdev/gpt/gpt.o 00:22:53.341 SO libspdk_scheduler_dpdk_governor.so.4.0 00:22:53.341 SO libspdk_sock_posix.so.6.0 00:22:53.341 CC module/keyring/linux/keyring_rpc.o 00:22:53.341 SYMLINK libspdk_fsdev_aio.so 00:22:53.341 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:22:53.341 SYMLINK libspdk_scheduler_dpdk_governor.so 00:22:53.341 SYMLINK libspdk_sock_posix.so 00:22:53.600 LIB libspdk_scheduler_gscheduler.a 00:22:53.600 LIB libspdk_keyring_linux.a 00:22:53.600 CC module/bdev/gpt/vbdev_gpt.o 00:22:53.600 SO libspdk_scheduler_gscheduler.so.4.0 00:22:53.600 SO libspdk_keyring_linux.so.1.0 00:22:53.600 LIB libspdk_blobfs_bdev.a 00:22:53.600 CC module/bdev/lvol/vbdev_lvol.o 00:22:53.600 CC module/bdev/delay/vbdev_delay_rpc.o 00:22:53.600 SYMLINK libspdk_scheduler_gscheduler.so 00:22:53.600 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:22:53.600 SO libspdk_blobfs_bdev.so.6.0 00:22:53.600 CC module/bdev/malloc/bdev_malloc.o 00:22:53.600 SYMLINK libspdk_keyring_linux.so 00:22:53.600 LIB libspdk_bdev_error.a 00:22:53.600 CC module/bdev/null/bdev_null.o 00:22:53.859 SYMLINK libspdk_blobfs_bdev.so 00:22:53.859 SO libspdk_bdev_error.so.6.0 00:22:53.859 CC module/bdev/malloc/bdev_malloc_rpc.o 00:22:53.859 CC module/bdev/nvme/bdev_nvme.o 00:22:53.859 SYMLINK libspdk_bdev_error.so 00:22:53.859 LIB libspdk_bdev_gpt.a 00:22:53.859 CC module/bdev/passthru/vbdev_passthru.o 00:22:53.859 SO libspdk_bdev_gpt.so.6.0 00:22:53.859 LIB libspdk_bdev_delay.a 00:22:53.859 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:22:54.117 SYMLINK libspdk_bdev_gpt.so 00:22:54.117 CC module/bdev/nvme/bdev_nvme_rpc.o 00:22:54.117 SO libspdk_bdev_delay.so.6.0 00:22:54.117 CC module/bdev/nvme/nvme_rpc.o 00:22:54.117 CC module/bdev/null/bdev_null_rpc.o 00:22:54.117 CC module/bdev/raid/bdev_raid.o 00:22:54.117 SYMLINK libspdk_bdev_delay.so 00:22:54.117 CC module/bdev/raid/bdev_raid_rpc.o 00:22:54.117 CC module/bdev/raid/bdev_raid_sb.o 00:22:54.376 LIB libspdk_bdev_malloc.a 00:22:54.376 LIB libspdk_bdev_passthru.a 00:22:54.376 LIB libspdk_bdev_null.a 00:22:54.376 SO libspdk_bdev_malloc.so.6.0 00:22:54.376 SO libspdk_bdev_passthru.so.6.0 00:22:54.376 CC module/bdev/nvme/bdev_mdns_client.o 00:22:54.376 SO libspdk_bdev_null.so.6.0 00:22:54.376 SYMLINK libspdk_bdev_passthru.so 00:22:54.376 SYMLINK libspdk_bdev_malloc.so 00:22:54.376 CC module/bdev/nvme/vbdev_opal.o 00:22:54.376 CC module/bdev/raid/raid0.o 00:22:54.376 CC module/bdev/nvme/vbdev_opal_rpc.o 00:22:54.376 SYMLINK libspdk_bdev_null.so 00:22:54.376 LIB libspdk_bdev_lvol.a 00:22:54.376 SO libspdk_bdev_lvol.so.6.0 00:22:54.636 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:22:54.636 SYMLINK libspdk_bdev_lvol.so 00:22:54.636 CC 
module/bdev/raid/raid1.o 00:22:54.636 CC module/bdev/raid/concat.o 00:22:54.636 CC module/bdev/split/vbdev_split.o 00:22:54.636 CC module/bdev/split/vbdev_split_rpc.o 00:22:54.636 CC module/bdev/zone_block/vbdev_zone_block.o 00:22:54.895 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:22:54.895 LIB libspdk_bdev_split.a 00:22:54.895 SO libspdk_bdev_split.so.6.0 00:22:54.895 CC module/bdev/aio/bdev_aio.o 00:22:55.153 CC module/bdev/ftl/bdev_ftl.o 00:22:55.153 SYMLINK libspdk_bdev_split.so 00:22:55.153 CC module/bdev/ftl/bdev_ftl_rpc.o 00:22:55.153 CC module/bdev/aio/bdev_aio_rpc.o 00:22:55.153 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:22:55.153 CC module/bdev/iscsi/bdev_iscsi.o 00:22:55.153 LIB libspdk_bdev_zone_block.a 00:22:55.153 CC module/bdev/virtio/bdev_virtio_scsi.o 00:22:55.153 SO libspdk_bdev_zone_block.so.6.0 00:22:55.411 SYMLINK libspdk_bdev_zone_block.so 00:22:55.411 CC module/bdev/virtio/bdev_virtio_rpc.o 00:22:55.411 CC module/bdev/virtio/bdev_virtio_blk.o 00:22:55.411 LIB libspdk_bdev_raid.a 00:22:55.411 LIB libspdk_bdev_ftl.a 00:22:55.411 LIB libspdk_bdev_aio.a 00:22:55.411 SO libspdk_bdev_ftl.so.6.0 00:22:55.411 SO libspdk_bdev_raid.so.6.0 00:22:55.411 SO libspdk_bdev_aio.so.6.0 00:22:55.411 SYMLINK libspdk_bdev_ftl.so 00:22:55.411 SYMLINK libspdk_bdev_aio.so 00:22:55.670 SYMLINK libspdk_bdev_raid.so 00:22:55.670 LIB libspdk_bdev_iscsi.a 00:22:55.670 SO libspdk_bdev_iscsi.so.6.0 00:22:55.670 SYMLINK libspdk_bdev_iscsi.so 00:22:55.928 LIB libspdk_bdev_virtio.a 00:22:55.928 SO libspdk_bdev_virtio.so.6.0 00:22:55.928 SYMLINK libspdk_bdev_virtio.so 00:22:56.864 LIB libspdk_bdev_nvme.a 00:22:57.124 SO libspdk_bdev_nvme.so.7.1 00:22:57.124 SYMLINK libspdk_bdev_nvme.so 00:22:57.691 CC module/event/subsystems/sock/sock.o 00:22:57.691 CC module/event/subsystems/fsdev/fsdev.o 00:22:57.691 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:22:57.691 CC module/event/subsystems/vmd/vmd.o 00:22:57.691 CC module/event/subsystems/vmd/vmd_rpc.o 00:22:57.691 CC module/event/subsystems/scheduler/scheduler.o 00:22:57.691 CC module/event/subsystems/keyring/keyring.o 00:22:57.691 CC module/event/subsystems/iobuf/iobuf.o 00:22:57.691 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:22:57.951 LIB libspdk_event_fsdev.a 00:22:57.951 SO libspdk_event_fsdev.so.1.0 00:22:57.951 LIB libspdk_event_keyring.a 00:22:57.951 LIB libspdk_event_scheduler.a 00:22:57.951 LIB libspdk_event_vhost_blk.a 00:22:57.951 LIB libspdk_event_sock.a 00:22:57.951 SYMLINK libspdk_event_fsdev.so 00:22:57.951 LIB libspdk_event_vmd.a 00:22:57.951 SO libspdk_event_keyring.so.1.0 00:22:57.951 SO libspdk_event_scheduler.so.4.0 00:22:57.951 SO libspdk_event_vhost_blk.so.3.0 00:22:57.951 SO libspdk_event_sock.so.5.0 00:22:57.951 LIB libspdk_event_iobuf.a 00:22:57.951 SO libspdk_event_vmd.so.6.0 00:22:57.951 SYMLINK libspdk_event_scheduler.so 00:22:57.951 SO libspdk_event_iobuf.so.3.0 00:22:57.951 SYMLINK libspdk_event_vhost_blk.so 00:22:57.951 SYMLINK libspdk_event_sock.so 00:22:57.951 SYMLINK libspdk_event_keyring.so 00:22:57.951 SYMLINK libspdk_event_vmd.so 00:22:57.951 SYMLINK libspdk_event_iobuf.so 00:22:58.519 CC module/event/subsystems/accel/accel.o 00:22:58.519 LIB libspdk_event_accel.a 00:22:58.519 SO libspdk_event_accel.so.6.0 00:22:58.778 SYMLINK libspdk_event_accel.so 00:22:59.036 CC module/event/subsystems/bdev/bdev.o 00:22:59.295 LIB libspdk_event_bdev.a 00:22:59.295 SO libspdk_event_bdev.so.6.0 00:22:59.295 SYMLINK libspdk_event_bdev.so 00:22:59.553 CC module/event/subsystems/scsi/scsi.o 00:22:59.553 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:22:59.553 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:22:59.553 CC module/event/subsystems/ublk/ublk.o 00:22:59.553 CC module/event/subsystems/nbd/nbd.o 00:22:59.812 LIB libspdk_event_ublk.a 00:22:59.812 LIB libspdk_event_scsi.a 00:22:59.812 LIB libspdk_event_nbd.a 00:22:59.812 SO libspdk_event_scsi.so.6.0 00:22:59.812 SO libspdk_event_ublk.so.3.0 00:22:59.812 SO libspdk_event_nbd.so.6.0 00:22:59.812 LIB libspdk_event_nvmf.a 00:22:59.812 SYMLINK libspdk_event_nbd.so 00:22:59.812 SYMLINK libspdk_event_ublk.so 00:23:00.071 SO libspdk_event_nvmf.so.6.0 00:23:00.071 SYMLINK libspdk_event_scsi.so 00:23:00.071 SYMLINK libspdk_event_nvmf.so 00:23:00.329 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:23:00.329 CC module/event/subsystems/iscsi/iscsi.o 00:23:00.587 LIB libspdk_event_vhost_scsi.a 00:23:00.587 SO libspdk_event_vhost_scsi.so.3.0 00:23:00.587 LIB libspdk_event_iscsi.a 00:23:00.587 SO libspdk_event_iscsi.so.6.0 00:23:00.587 SYMLINK libspdk_event_vhost_scsi.so 00:23:00.587 SYMLINK libspdk_event_iscsi.so 00:23:00.913 SO libspdk.so.6.0 00:23:00.913 SYMLINK libspdk.so 00:23:01.171 TEST_HEADER include/spdk/accel.h 00:23:01.171 TEST_HEADER include/spdk/accel_module.h 00:23:01.171 TEST_HEADER include/spdk/assert.h 00:23:01.171 CXX app/trace/trace.o 00:23:01.171 TEST_HEADER include/spdk/barrier.h 00:23:01.171 TEST_HEADER include/spdk/base64.h 00:23:01.171 TEST_HEADER include/spdk/bdev.h 00:23:01.171 CC test/rpc_client/rpc_client_test.o 00:23:01.171 TEST_HEADER include/spdk/bdev_module.h 00:23:01.171 TEST_HEADER include/spdk/bdev_zone.h 00:23:01.171 TEST_HEADER include/spdk/bit_array.h 00:23:01.171 TEST_HEADER include/spdk/bit_pool.h 00:23:01.171 TEST_HEADER include/spdk/blob_bdev.h 00:23:01.171 TEST_HEADER include/spdk/blobfs_bdev.h 00:23:01.171 TEST_HEADER include/spdk/blobfs.h 00:23:01.171 TEST_HEADER include/spdk/blob.h 00:23:01.171 TEST_HEADER include/spdk/conf.h 00:23:01.171 TEST_HEADER include/spdk/config.h 00:23:01.171 TEST_HEADER include/spdk/cpuset.h 00:23:01.171 TEST_HEADER include/spdk/crc16.h 00:23:01.171 TEST_HEADER include/spdk/crc32.h 00:23:01.171 TEST_HEADER include/spdk/crc64.h 00:23:01.171 TEST_HEADER include/spdk/dif.h 00:23:01.171 TEST_HEADER include/spdk/dma.h 00:23:01.171 TEST_HEADER include/spdk/endian.h 00:23:01.171 TEST_HEADER include/spdk/env_dpdk.h 00:23:01.171 TEST_HEADER include/spdk/env.h 00:23:01.171 TEST_HEADER include/spdk/event.h 00:23:01.171 TEST_HEADER include/spdk/fd_group.h 00:23:01.171 TEST_HEADER include/spdk/fd.h 00:23:01.171 TEST_HEADER include/spdk/file.h 00:23:01.171 TEST_HEADER include/spdk/fsdev.h 00:23:01.171 TEST_HEADER include/spdk/fsdev_module.h 00:23:01.171 TEST_HEADER include/spdk/ftl.h 00:23:01.171 TEST_HEADER include/spdk/fuse_dispatcher.h 00:23:01.171 TEST_HEADER include/spdk/gpt_spec.h 00:23:01.171 TEST_HEADER include/spdk/hexlify.h 00:23:01.171 CC examples/util/zipf/zipf.o 00:23:01.171 TEST_HEADER include/spdk/histogram_data.h 00:23:01.171 TEST_HEADER include/spdk/idxd.h 00:23:01.171 TEST_HEADER include/spdk/idxd_spec.h 00:23:01.171 TEST_HEADER include/spdk/init.h 00:23:01.171 TEST_HEADER include/spdk/ioat.h 00:23:01.171 TEST_HEADER include/spdk/ioat_spec.h 00:23:01.171 TEST_HEADER include/spdk/iscsi_spec.h 00:23:01.171 TEST_HEADER include/spdk/json.h 00:23:01.171 TEST_HEADER include/spdk/jsonrpc.h 00:23:01.171 TEST_HEADER include/spdk/keyring.h 00:23:01.171 CC examples/ioat/perf/perf.o 00:23:01.171 CC test/dma/test_dma/test_dma.o 00:23:01.171 TEST_HEADER include/spdk/keyring_module.h 
00:23:01.171 TEST_HEADER include/spdk/likely.h 00:23:01.430 TEST_HEADER include/spdk/log.h 00:23:01.430 CC test/thread/poller_perf/poller_perf.o 00:23:01.430 CC test/app/bdev_svc/bdev_svc.o 00:23:01.430 TEST_HEADER include/spdk/lvol.h 00:23:01.430 TEST_HEADER include/spdk/md5.h 00:23:01.430 TEST_HEADER include/spdk/memory.h 00:23:01.430 TEST_HEADER include/spdk/mmio.h 00:23:01.430 TEST_HEADER include/spdk/nbd.h 00:23:01.430 TEST_HEADER include/spdk/net.h 00:23:01.430 TEST_HEADER include/spdk/notify.h 00:23:01.430 TEST_HEADER include/spdk/nvme.h 00:23:01.430 TEST_HEADER include/spdk/nvme_intel.h 00:23:01.430 TEST_HEADER include/spdk/nvme_ocssd.h 00:23:01.430 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:23:01.430 TEST_HEADER include/spdk/nvme_spec.h 00:23:01.430 TEST_HEADER include/spdk/nvme_zns.h 00:23:01.430 TEST_HEADER include/spdk/nvmf_cmd.h 00:23:01.430 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:23:01.430 TEST_HEADER include/spdk/nvmf.h 00:23:01.430 TEST_HEADER include/spdk/nvmf_spec.h 00:23:01.430 TEST_HEADER include/spdk/nvmf_transport.h 00:23:01.430 TEST_HEADER include/spdk/opal.h 00:23:01.430 TEST_HEADER include/spdk/opal_spec.h 00:23:01.430 TEST_HEADER include/spdk/pci_ids.h 00:23:01.430 TEST_HEADER include/spdk/pipe.h 00:23:01.430 LINK rpc_client_test 00:23:01.430 TEST_HEADER include/spdk/queue.h 00:23:01.430 TEST_HEADER include/spdk/reduce.h 00:23:01.430 TEST_HEADER include/spdk/rpc.h 00:23:01.430 TEST_HEADER include/spdk/scheduler.h 00:23:01.430 TEST_HEADER include/spdk/scsi.h 00:23:01.430 TEST_HEADER include/spdk/scsi_spec.h 00:23:01.430 CC test/env/mem_callbacks/mem_callbacks.o 00:23:01.430 TEST_HEADER include/spdk/sock.h 00:23:01.430 TEST_HEADER include/spdk/stdinc.h 00:23:01.430 TEST_HEADER include/spdk/string.h 00:23:01.430 TEST_HEADER include/spdk/thread.h 00:23:01.430 TEST_HEADER include/spdk/trace.h 00:23:01.430 TEST_HEADER include/spdk/trace_parser.h 00:23:01.430 TEST_HEADER include/spdk/tree.h 00:23:01.430 TEST_HEADER include/spdk/ublk.h 00:23:01.430 TEST_HEADER include/spdk/util.h 00:23:01.431 TEST_HEADER include/spdk/uuid.h 00:23:01.431 TEST_HEADER include/spdk/version.h 00:23:01.431 TEST_HEADER include/spdk/vfio_user_pci.h 00:23:01.431 TEST_HEADER include/spdk/vfio_user_spec.h 00:23:01.431 TEST_HEADER include/spdk/vhost.h 00:23:01.431 TEST_HEADER include/spdk/vmd.h 00:23:01.431 TEST_HEADER include/spdk/xor.h 00:23:01.431 TEST_HEADER include/spdk/zipf.h 00:23:01.431 CXX test/cpp_headers/accel.o 00:23:01.688 LINK zipf 00:23:01.688 LINK poller_perf 00:23:01.688 CXX test/cpp_headers/accel_module.o 00:23:01.688 LINK bdev_svc 00:23:01.688 LINK ioat_perf 00:23:01.688 LINK spdk_trace 00:23:01.688 CC app/trace_record/trace_record.o 00:23:01.946 LINK test_dma 00:23:01.946 CXX test/cpp_headers/assert.o 00:23:01.946 CC test/env/vtophys/vtophys.o 00:23:01.946 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:23:01.946 CC examples/ioat/verify/verify.o 00:23:01.946 LINK mem_callbacks 00:23:02.205 CXX test/cpp_headers/barrier.o 00:23:02.205 LINK spdk_trace_record 00:23:02.205 LINK vtophys 00:23:02.205 CXX test/cpp_headers/base64.o 00:23:02.205 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:23:02.205 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:23:02.205 LINK env_dpdk_post_init 00:23:02.205 CXX test/cpp_headers/bdev.o 00:23:02.205 LINK verify 00:23:02.463 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:23:02.463 CC app/nvmf_tgt/nvmf_main.o 00:23:02.722 CXX test/cpp_headers/bdev_module.o 00:23:02.722 CC test/event/event_perf/event_perf.o 00:23:02.722 CC 
test/app/histogram_perf/histogram_perf.o 00:23:02.722 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:23:02.722 CC test/env/memory/memory_ut.o 00:23:02.722 LINK nvme_fuzz 00:23:02.980 LINK nvmf_tgt 00:23:02.980 CC test/nvme/aer/aer.o 00:23:02.980 LINK event_perf 00:23:02.980 LINK histogram_perf 00:23:02.980 CXX test/cpp_headers/bdev_zone.o 00:23:02.980 CXX test/cpp_headers/bit_array.o 00:23:03.237 CXX test/cpp_headers/bit_pool.o 00:23:03.237 CC test/event/reactor/reactor.o 00:23:03.237 LINK vhost_fuzz 00:23:03.237 LINK aer 00:23:03.495 CXX test/cpp_headers/blob_bdev.o 00:23:03.495 CC examples/interrupt_tgt/interrupt_tgt.o 00:23:03.495 CC app/iscsi_tgt/iscsi_tgt.o 00:23:03.495 LINK reactor 00:23:03.495 CC test/nvme/reset/reset.o 00:23:03.495 CXX test/cpp_headers/blobfs_bdev.o 00:23:03.773 CXX test/cpp_headers/blobfs.o 00:23:03.773 LINK interrupt_tgt 00:23:03.773 LINK iscsi_tgt 00:23:03.773 CXX test/cpp_headers/blob.o 00:23:03.773 CC test/event/reactor_perf/reactor_perf.o 00:23:04.052 LINK reset 00:23:04.052 CC app/spdk_lspci/spdk_lspci.o 00:23:04.052 CC app/spdk_tgt/spdk_tgt.o 00:23:04.052 CXX test/cpp_headers/conf.o 00:23:04.310 LINK reactor_perf 00:23:04.310 CC app/spdk_nvme_perf/perf.o 00:23:04.310 CC test/app/jsoncat/jsoncat.o 00:23:04.310 CC test/nvme/sgl/sgl.o 00:23:04.568 LINK spdk_lspci 00:23:04.568 CXX test/cpp_headers/config.o 00:23:04.568 LINK spdk_tgt 00:23:04.568 LINK iscsi_fuzz 00:23:04.568 CXX test/cpp_headers/cpuset.o 00:23:04.568 LINK jsoncat 00:23:04.826 LINK memory_ut 00:23:04.826 LINK sgl 00:23:04.826 CC test/event/app_repeat/app_repeat.o 00:23:04.826 CXX test/cpp_headers/crc16.o 00:23:05.084 CC test/app/stub/stub.o 00:23:05.084 CC test/env/pci/pci_ut.o 00:23:05.084 LINK app_repeat 00:23:05.084 CXX test/cpp_headers/crc32.o 00:23:05.084 CC app/spdk_nvme_identify/identify.o 00:23:05.342 CC app/spdk_top/spdk_top.o 00:23:05.342 CC test/nvme/e2edp/nvme_dp.o 00:23:05.342 CC app/spdk_nvme_discover/discovery_aer.o 00:23:05.342 LINK stub 00:23:05.342 CXX test/cpp_headers/crc64.o 00:23:05.600 LINK nvme_dp 00:23:05.600 LINK spdk_nvme_discover 00:23:05.600 CXX test/cpp_headers/dif.o 00:23:05.600 CXX test/cpp_headers/dma.o 00:23:05.600 LINK pci_ut 00:23:05.600 CC test/event/scheduler/scheduler.o 00:23:05.859 LINK spdk_nvme_perf 00:23:05.859 CC test/nvme/overhead/overhead.o 00:23:05.859 CXX test/cpp_headers/endian.o 00:23:05.859 CC test/nvme/err_injection/err_injection.o 00:23:06.118 CXX test/cpp_headers/env_dpdk.o 00:23:06.118 LINK err_injection 00:23:06.118 LINK scheduler 00:23:06.118 LINK spdk_top 00:23:06.377 CC test/nvme/startup/startup.o 00:23:06.377 CC test/accel/dif/dif.o 00:23:06.377 CC test/nvme/reserve/reserve.o 00:23:06.377 LINK overhead 00:23:06.377 CXX test/cpp_headers/env.o 00:23:06.635 CXX test/cpp_headers/event.o 00:23:06.635 CXX test/cpp_headers/fd_group.o 00:23:06.635 LINK spdk_nvme_identify 00:23:06.635 LINK startup 00:23:06.635 LINK reserve 00:23:06.893 CXX test/cpp_headers/fd.o 00:23:06.893 CC test/nvme/simple_copy/simple_copy.o 00:23:06.893 CC test/blobfs/mkfs/mkfs.o 00:23:07.151 CC examples/thread/thread/thread_ex.o 00:23:07.151 CXX test/cpp_headers/file.o 00:23:07.151 CC app/spdk_dd/spdk_dd.o 00:23:07.151 CC app/vhost/vhost.o 00:23:07.151 LINK dif 00:23:07.151 CC examples/sock/hello_world/hello_sock.o 00:23:07.151 LINK simple_copy 00:23:07.410 LINK mkfs 00:23:07.410 CC app/fio/nvme/fio_plugin.o 00:23:07.410 LINK vhost 00:23:07.410 CXX test/cpp_headers/fsdev.o 00:23:07.410 LINK thread 00:23:07.410 CXX test/cpp_headers/fsdev_module.o 00:23:07.410 LINK hello_sock 
00:23:07.667 CXX test/cpp_headers/ftl.o 00:23:07.667 LINK spdk_dd 00:23:07.667 CXX test/cpp_headers/fuse_dispatcher.o 00:23:07.667 CC test/nvme/connect_stress/connect_stress.o 00:23:07.925 CC test/nvme/boot_partition/boot_partition.o 00:23:07.925 CXX test/cpp_headers/gpt_spec.o 00:23:07.925 CC test/nvme/compliance/nvme_compliance.o 00:23:07.925 CC app/fio/bdev/fio_plugin.o 00:23:07.925 CC examples/vmd/lsvmd/lsvmd.o 00:23:07.925 LINK spdk_nvme 00:23:07.925 CC examples/vmd/led/led.o 00:23:07.925 LINK connect_stress 00:23:07.925 LINK boot_partition 00:23:08.183 CXX test/cpp_headers/hexlify.o 00:23:08.183 LINK lsvmd 00:23:08.183 LINK led 00:23:08.183 CC test/nvme/fused_ordering/fused_ordering.o 00:23:08.183 LINK nvme_compliance 00:23:08.183 CXX test/cpp_headers/histogram_data.o 00:23:08.183 CC test/lvol/esnap/esnap.o 00:23:08.441 CXX test/cpp_headers/idxd.o 00:23:08.441 CC test/nvme/doorbell_aers/doorbell_aers.o 00:23:08.441 LINK fused_ordering 00:23:08.441 CC examples/idxd/perf/perf.o 00:23:08.441 LINK spdk_bdev 00:23:08.441 CC test/bdev/bdevio/bdevio.o 00:23:08.700 CXX test/cpp_headers/idxd_spec.o 00:23:08.700 CC test/nvme/fdp/fdp.o 00:23:08.700 LINK doorbell_aers 00:23:08.700 CC examples/fsdev/hello_world/hello_fsdev.o 00:23:08.700 CC test/nvme/cuse/cuse.o 00:23:08.700 CXX test/cpp_headers/init.o 00:23:08.959 CXX test/cpp_headers/ioat.o 00:23:08.959 LINK idxd_perf 00:23:08.959 LINK bdevio 00:23:08.959 CC examples/accel/perf/accel_perf.o 00:23:08.959 LINK fdp 00:23:08.959 CXX test/cpp_headers/ioat_spec.o 00:23:09.218 LINK hello_fsdev 00:23:09.218 CXX test/cpp_headers/iscsi_spec.o 00:23:09.485 CC examples/blob/hello_world/hello_blob.o 00:23:09.485 CC examples/nvme/hello_world/hello_world.o 00:23:09.485 CC examples/nvme/reconnect/reconnect.o 00:23:09.485 CXX test/cpp_headers/json.o 00:23:09.485 CC examples/blob/cli/blobcli.o 00:23:09.485 LINK accel_perf 00:23:09.755 CC examples/nvme/nvme_manage/nvme_manage.o 00:23:09.755 CXX test/cpp_headers/jsonrpc.o 00:23:09.755 LINK hello_blob 00:23:09.755 LINK hello_world 00:23:09.755 CXX test/cpp_headers/keyring.o 00:23:09.755 CC examples/nvme/arbitration/arbitration.o 00:23:10.013 LINK reconnect 00:23:10.013 LINK blobcli 00:23:10.013 CXX test/cpp_headers/keyring_module.o 00:23:10.013 CC examples/nvme/hotplug/hotplug.o 00:23:10.013 CC examples/nvme/cmb_copy/cmb_copy.o 00:23:10.271 CC examples/nvme/abort/abort.o 00:23:10.271 CXX test/cpp_headers/likely.o 00:23:10.271 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:23:10.271 LINK arbitration 00:23:10.271 LINK hotplug 00:23:10.271 LINK cuse 00:23:10.271 LINK nvme_manage 00:23:10.530 LINK cmb_copy 00:23:10.530 CXX test/cpp_headers/log.o 00:23:10.530 CC examples/bdev/hello_world/hello_bdev.o 00:23:10.530 CXX test/cpp_headers/lvol.o 00:23:10.530 LINK pmr_persistence 00:23:10.530 LINK abort 00:23:10.530 CXX test/cpp_headers/md5.o 00:23:10.530 CXX test/cpp_headers/memory.o 00:23:10.530 CXX test/cpp_headers/mmio.o 00:23:10.789 CXX test/cpp_headers/nbd.o 00:23:10.789 CXX test/cpp_headers/net.o 00:23:10.789 CXX test/cpp_headers/notify.o 00:23:10.789 CC examples/bdev/bdevperf/bdevperf.o 00:23:10.789 CXX test/cpp_headers/nvme.o 00:23:10.789 CXX test/cpp_headers/nvme_intel.o 00:23:10.789 CXX test/cpp_headers/nvme_ocssd.o 00:23:10.789 CXX test/cpp_headers/nvme_ocssd_spec.o 00:23:10.789 LINK hello_bdev 00:23:11.049 CXX test/cpp_headers/nvme_spec.o 00:23:11.049 CXX test/cpp_headers/nvme_zns.o 00:23:11.049 CXX test/cpp_headers/nvmf_cmd.o 00:23:11.049 CXX test/cpp_headers/nvmf_fc_spec.o 00:23:11.049 CXX 
test/cpp_headers/nvmf.o 00:23:11.049 CXX test/cpp_headers/nvmf_spec.o 00:23:11.049 CXX test/cpp_headers/nvmf_transport.o 00:23:11.049 CXX test/cpp_headers/opal.o 00:23:11.306 CXX test/cpp_headers/opal_spec.o 00:23:11.307 CXX test/cpp_headers/pci_ids.o 00:23:11.307 CXX test/cpp_headers/pipe.o 00:23:11.307 CXX test/cpp_headers/queue.o 00:23:11.307 CXX test/cpp_headers/reduce.o 00:23:11.307 CXX test/cpp_headers/rpc.o 00:23:11.307 CXX test/cpp_headers/scheduler.o 00:23:11.307 CXX test/cpp_headers/scsi.o 00:23:11.307 CXX test/cpp_headers/scsi_spec.o 00:23:11.307 CXX test/cpp_headers/sock.o 00:23:11.307 CXX test/cpp_headers/stdinc.o 00:23:11.307 CXX test/cpp_headers/string.o 00:23:11.565 CXX test/cpp_headers/thread.o 00:23:11.565 CXX test/cpp_headers/trace.o 00:23:11.565 CXX test/cpp_headers/trace_parser.o 00:23:11.565 CXX test/cpp_headers/tree.o 00:23:11.565 CXX test/cpp_headers/ublk.o 00:23:11.565 CXX test/cpp_headers/util.o 00:23:11.565 LINK bdevperf 00:23:11.565 CXX test/cpp_headers/uuid.o 00:23:11.565 CXX test/cpp_headers/version.o 00:23:11.565 CXX test/cpp_headers/vfio_user_pci.o 00:23:11.565 CXX test/cpp_headers/vfio_user_spec.o 00:23:11.565 CXX test/cpp_headers/vhost.o 00:23:11.565 CXX test/cpp_headers/vmd.o 00:23:11.824 CXX test/cpp_headers/xor.o 00:23:11.824 CXX test/cpp_headers/zipf.o 00:23:12.082 CC examples/nvmf/nvmf/nvmf.o 00:23:12.649 LINK nvmf 00:23:14.607 LINK esnap 00:23:14.607 00:23:14.607 real 1m45.620s 00:23:14.607 user 9m35.263s 00:23:14.607 sys 2m16.585s 00:23:14.607 05:34:34 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:23:14.607 ************************************ 00:23:14.607 END TEST make 00:23:14.607 05:34:34 make -- common/autotest_common.sh@10 -- $ set +x 00:23:14.607 ************************************ 00:23:14.899 05:34:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:23:14.899 05:34:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:14.899 05:34:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:14.899 05:34:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:14.899 05:34:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:23:14.899 05:34:34 -- pm/common@44 -- $ pid=5308 00:23:14.899 05:34:34 -- pm/common@50 -- $ kill -TERM 5308 00:23:14.899 05:34:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:14.899 05:34:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:23:14.899 05:34:34 -- pm/common@44 -- $ pid=5309 00:23:14.899 05:34:34 -- pm/common@50 -- $ kill -TERM 5309 00:23:14.899 05:34:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:23:14.899 05:34:34 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:23:14.899 05:34:34 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:14.899 05:34:34 -- common/autotest_common.sh@1691 -- # lcov --version 00:23:14.899 05:34:34 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:14.899 05:34:34 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:14.899 05:34:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.899 05:34:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.899 05:34:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.899 05:34:34 -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.899 05:34:34 -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.899 05:34:34 -- 
scripts/common.sh@337 -- # IFS=.-: 00:23:14.899 05:34:34 -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.899 05:34:34 -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.899 05:34:34 -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.899 05:34:34 -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.899 05:34:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.899 05:34:34 -- scripts/common.sh@344 -- # case "$op" in 00:23:14.900 05:34:34 -- scripts/common.sh@345 -- # : 1 00:23:14.900 05:34:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.900 05:34:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.900 05:34:34 -- scripts/common.sh@365 -- # decimal 1 00:23:14.900 05:34:34 -- scripts/common.sh@353 -- # local d=1 00:23:14.900 05:34:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.900 05:34:34 -- scripts/common.sh@355 -- # echo 1 00:23:14.900 05:34:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.900 05:34:34 -- scripts/common.sh@366 -- # decimal 2 00:23:14.900 05:34:34 -- scripts/common.sh@353 -- # local d=2 00:23:14.900 05:34:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.900 05:34:34 -- scripts/common.sh@355 -- # echo 2 00:23:14.900 05:34:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.900 05:34:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.900 05:34:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.900 05:34:34 -- scripts/common.sh@368 -- # return 0 00:23:14.900 05:34:34 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.900 05:34:34 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:14.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.900 --rc genhtml_branch_coverage=1 00:23:14.900 --rc genhtml_function_coverage=1 00:23:14.900 --rc genhtml_legend=1 00:23:14.900 --rc geninfo_all_blocks=1 00:23:14.900 --rc geninfo_unexecuted_blocks=1 00:23:14.900 00:23:14.900 ' 00:23:14.900 05:34:34 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:14.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.900 --rc genhtml_branch_coverage=1 00:23:14.900 --rc genhtml_function_coverage=1 00:23:14.900 --rc genhtml_legend=1 00:23:14.900 --rc geninfo_all_blocks=1 00:23:14.900 --rc geninfo_unexecuted_blocks=1 00:23:14.900 00:23:14.900 ' 00:23:14.900 05:34:34 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:14.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.900 --rc genhtml_branch_coverage=1 00:23:14.900 --rc genhtml_function_coverage=1 00:23:14.900 --rc genhtml_legend=1 00:23:14.900 --rc geninfo_all_blocks=1 00:23:14.900 --rc geninfo_unexecuted_blocks=1 00:23:14.900 00:23:14.900 ' 00:23:14.900 05:34:34 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:14.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.900 --rc genhtml_branch_coverage=1 00:23:14.900 --rc genhtml_function_coverage=1 00:23:14.900 --rc genhtml_legend=1 00:23:14.900 --rc geninfo_all_blocks=1 00:23:14.900 --rc geninfo_unexecuted_blocks=1 00:23:14.900 00:23:14.900 ' 00:23:14.900 05:34:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:14.900 05:34:34 -- nvmf/common.sh@7 -- # uname -s 00:23:14.900 05:34:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.900 05:34:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.900 05:34:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.900 05:34:34 
-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.900 05:34:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.900 05:34:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.900 05:34:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.900 05:34:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.900 05:34:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.900 05:34:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.900 05:34:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:23:14.900 05:34:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:23:14.900 05:34:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.900 05:34:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.900 05:34:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:14.900 05:34:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.900 05:34:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:14.900 05:34:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.900 05:34:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.900 05:34:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.900 05:34:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.900 05:34:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.900 05:34:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.900 05:34:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.900 05:34:34 -- paths/export.sh@5 -- # export PATH 00:23:14.900 05:34:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.900 05:34:34 -- nvmf/common.sh@51 -- # : 0 00:23:14.900 05:34:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.900 05:34:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.900 05:34:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.900 05:34:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.900 05:34:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.900 05:34:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.900 05:34:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.900 05:34:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.900 05:34:34 -- 
nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.900 05:34:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:23:14.900 05:34:34 -- spdk/autotest.sh@32 -- # uname -s 00:23:14.900 05:34:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:23:14.900 05:34:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:23:14.900 05:34:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:23:14.900 05:34:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:23:14.900 05:34:34 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:23:14.900 05:34:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:23:14.900 05:34:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:23:14.900 05:34:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:23:14.900 05:34:34 -- spdk/autotest.sh@48 -- # udevadm_pid=56297 00:23:14.900 05:34:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:23:14.900 05:34:34 -- pm/common@17 -- # local monitor 00:23:14.900 05:34:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:23:14.900 05:34:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:23:14.900 05:34:34 -- pm/common@21 -- # date +%s 00:23:14.900 05:34:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:23:14.900 05:34:34 -- pm/common@25 -- # sleep 1 00:23:14.900 05:34:34 -- pm/common@21 -- # date +%s 00:23:14.900 05:34:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732080874 00:23:14.900 05:34:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732080874 00:23:15.159 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732080874_collect-vmstat.pm.log 00:23:15.159 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732080874_collect-cpu-load.pm.log 00:23:16.095 05:34:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:23:16.095 05:34:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:23:16.095 05:34:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.095 05:34:35 -- common/autotest_common.sh@10 -- # set +x 00:23:16.095 05:34:35 -- spdk/autotest.sh@59 -- # create_test_list 00:23:16.095 05:34:35 -- common/autotest_common.sh@750 -- # xtrace_disable 00:23:16.095 05:34:35 -- common/autotest_common.sh@10 -- # set +x 00:23:16.095 05:34:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:23:16.095 05:34:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:23:16.095 05:34:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:23:16.095 05:34:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:23:16.095 05:34:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:23:16.095 05:34:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:23:16.095 05:34:35 -- common/autotest_common.sh@1455 -- # uname 00:23:16.095 05:34:35 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:23:16.095 05:34:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:23:16.095 05:34:35 -- common/autotest_common.sh@1475 -- # uname 00:23:16.095 05:34:35 -- common/autotest_common.sh@1475 -- # [[ 
Linux = FreeBSD ]] 00:23:16.095 05:34:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:23:16.095 05:34:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:23:16.095 lcov: LCOV version 1.15 00:23:16.095 05:34:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:23:34.222 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:23:34.222 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:23:56.154 05:35:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:23:56.154 05:35:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.154 05:35:13 -- common/autotest_common.sh@10 -- # set +x 00:23:56.154 05:35:13 -- spdk/autotest.sh@78 -- # rm -f 00:23:56.154 05:35:13 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:56.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:56.154 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:56.154 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:56.154 05:35:14 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:23:56.154 05:35:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:23:56.154 05:35:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:23:56.154 05:35:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:23:56.154 05:35:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:23:56.154 05:35:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:23:56.154 05:35:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:56.154 05:35:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:56.154 05:35:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:56.154 05:35:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:23:56.154 05:35:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:23:56.154 05:35:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:23:56.154 05:35:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:56.154 05:35:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:56.154 05:35:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:23:56.154 05:35:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:23:56.154 05:35:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:23:56.154 05:35:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:23:56.154 05:35:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:56.154 05:35:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:23:56.154 05:35:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:23:56.154 05:35:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:23:56.154 05:35:14 -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme1n3/queue/zoned ]] 00:23:56.154 05:35:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:56.154 05:35:14 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:23:56.154 05:35:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:56.154 05:35:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:56.154 05:35:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:23:56.154 05:35:14 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:23:56.154 05:35:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:23:56.154 No valid GPT data, bailing 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # pt= 00:23:56.154 05:35:14 -- scripts/common.sh@395 -- # return 1 00:23:56.154 05:35:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:23:56.154 1+0 records in 00:23:56.154 1+0 records out 00:23:56.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00425071 s, 247 MB/s 00:23:56.154 05:35:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:56.154 05:35:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:56.154 05:35:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:23:56.154 05:35:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:23:56.154 05:35:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:23:56.154 No valid GPT data, bailing 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # pt= 00:23:56.154 05:35:14 -- scripts/common.sh@395 -- # return 1 00:23:56.154 05:35:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:23:56.154 1+0 records in 00:23:56.154 1+0 records out 00:23:56.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467912 s, 224 MB/s 00:23:56.154 05:35:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:56.154 05:35:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:56.154 05:35:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:23:56.154 05:35:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:23:56.154 05:35:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:23:56.154 No valid GPT data, bailing 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # pt= 00:23:56.154 05:35:14 -- scripts/common.sh@395 -- # return 1 00:23:56.154 05:35:14 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:23:56.154 1+0 records in 00:23:56.154 1+0 records out 00:23:56.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449685 s, 233 MB/s 00:23:56.154 05:35:14 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:56.154 05:35:14 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:56.154 05:35:14 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:23:56.154 05:35:14 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:23:56.154 05:35:14 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:23:56.154 No valid GPT data, bailing 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:23:56.154 05:35:14 -- scripts/common.sh@394 -- # pt= 00:23:56.154 05:35:14 -- scripts/common.sh@395 -- # return 1 00:23:56.154 05:35:14 -- spdk/autotest.sh@101 -- # 
dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:23:56.154 1+0 records in 00:23:56.154 1+0 records out 00:23:56.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00400182 s, 262 MB/s 00:23:56.154 05:35:14 -- spdk/autotest.sh@105 -- # sync 00:23:56.154 05:35:14 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:23:56.154 05:35:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:23:56.154 05:35:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:23:57.089 05:35:16 -- spdk/autotest.sh@111 -- # uname -s 00:23:57.089 05:35:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:23:57.089 05:35:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:23:57.090 05:35:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:23:58.040 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:58.040 Hugepages 00:23:58.040 node hugesize free / total 00:23:58.040 node0 1048576kB 0 / 0 00:23:58.040 node0 2048kB 0 / 0 00:23:58.040 00:23:58.040 Type BDF Vendor Device NUMA Driver Device Block devices 00:23:58.040 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:23:58.040 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:23:58.040 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:23:58.040 05:35:17 -- spdk/autotest.sh@117 -- # uname -s 00:23:58.040 05:35:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:23:58.040 05:35:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:23:58.040 05:35:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:59.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:59.021 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:59.021 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:59.021 05:35:18 -- common/autotest_common.sh@1515 -- # sleep 1 00:24:00.395 05:35:19 -- common/autotest_common.sh@1516 -- # bdfs=() 00:24:00.395 05:35:19 -- common/autotest_common.sh@1516 -- # local bdfs 00:24:00.395 05:35:19 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:24:00.395 05:35:19 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:24:00.395 05:35:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:24:00.395 05:35:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:24:00.395 05:35:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:00.395 05:35:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:24:00.395 05:35:19 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:00.395 05:35:19 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:24:00.395 05:35:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:00.395 05:35:19 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:00.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:00.653 Waiting for block devices as requested 00:24:00.653 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:00.653 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:00.912 05:35:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:24:00.912 05:35:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
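
The get_nvme_ctrlr_from_bdf call traced below maps a PCI address to its NVMe controller node purely through sysfs, as the readlink/grep/basename steps show. A minimal standalone sketch of that lookup, assuming the /sys/class/nvme layout seen in this run (the function name mirrors the helper in autotest_common.sh; the body is a condensed approximation, not the exact implementation):

    get_nvme_ctrlr_from_bdf() {
        local bdf=$1 path
        # Resolve every /sys/class/nvme/nvmeN symlink to its real sysfs path
        # and keep the one that lives under the requested PCI address.
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
        basename "$path"    # prints e.g. "nvme1" for 0000:00:10.0 in this run
    }
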
00:24:00.912 05:35:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme
00:24:00.912 05:35:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:24:00.912 05:35:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # grep oacs
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:24:00.912 05:35:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:24:00.912 05:35:20 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:24:00.912 05:35:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:24:00.912 05:35:20 -- common/autotest_common.sh@1541 -- # continue
00:24:00.912 05:35:20 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:24:00.912 05:35:20 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:24:00.912 05:35:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:24:00.912 05:35:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:24:00.912 05:35:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:24:00.912 05:35:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:24:00.912 05:35:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:24:00.912 05:35:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:24:00.912 05:35:20 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:24:00.912 05:35:20 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # grep oacs
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:24:00.912 05:35:20 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:24:00.912 05:35:20 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:24:00.912 05:35:20 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:24:00.912 05:35:20 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:24:00.912 05:35:20 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
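
The ' 0x12a' OACS word captured above comes straight out of nvme id-ctrl; bit 3 (0x08) advertises namespace management, so oacs_ns_manage resolves to 8 and the [[ 8 -ne 0 ]] gate passes, while unvmcap=' 0' confirms there is no unallocated capacity left on the controller. A condensed sketch of the same gate, assuming nvme-cli's "oacs : 0x12a" output format:

    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then
        # 0x12a & 0x8 == 8, matching oacs_ns_manage=8 in the trace above
        echo "namespace management supported"
    fi
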
00:24:00.912 05:35:20 -- common/autotest_common.sh@1541 -- # continue 00:24:00.912 05:35:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:24:00.912 05:35:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.912 05:35:20 -- common/autotest_common.sh@10 -- # set +x 00:24:00.912 05:35:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:24:00.912 05:35:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.912 05:35:20 -- common/autotest_common.sh@10 -- # set +x 00:24:00.912 05:35:20 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:01.847 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:01.847 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.847 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.847 05:35:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:24:01.847 05:35:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:01.847 05:35:21 -- common/autotest_common.sh@10 -- # set +x 00:24:01.847 05:35:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:24:01.847 05:35:21 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:24:01.847 05:35:21 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:24:01.847 05:35:21 -- common/autotest_common.sh@1561 -- # bdfs=() 00:24:01.847 05:35:21 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:24:01.847 05:35:21 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:24:01.847 05:35:21 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:24:01.847 05:35:21 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:24:01.847 05:35:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:24:01.847 05:35:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:24:01.847 05:35:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:01.847 05:35:21 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:01.847 05:35:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:24:02.105 05:35:21 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:24:02.105 05:35:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:02.105 05:35:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:24:02.105 05:35:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:24:02.105 05:35:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:24:02.105 05:35:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:24:02.105 05:35:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:24:02.105 05:35:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:24:02.105 05:35:21 -- common/autotest_common.sh@1564 -- # device=0x0010 00:24:02.105 05:35:21 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:24:02.105 05:35:21 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:24:02.105 05:35:21 -- common/autotest_common.sh@1570 -- # return 0 00:24:02.105 05:35:21 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:24:02.105 05:35:21 -- common/autotest_common.sh@1578 -- # return 0 00:24:02.105 05:35:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:24:02.105 05:35:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:24:02.105 05:35:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:24:02.105 05:35:21 -- 
spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:24:02.105 05:35:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:24:02.105 05:35:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:02.105 05:35:21 -- common/autotest_common.sh@10 -- # set +x 00:24:02.105 05:35:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:24:02.105 05:35:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:24:02.105 05:35:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:02.105 05:35:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:02.105 05:35:21 -- common/autotest_common.sh@10 -- # set +x 00:24:02.105 ************************************ 00:24:02.105 START TEST env 00:24:02.105 ************************************ 00:24:02.105 05:35:21 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:24:02.105 * Looking for test storage... 00:24:02.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:24:02.105 05:35:21 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:02.105 05:35:21 env -- common/autotest_common.sh@1691 -- # lcov --version 00:24:02.105 05:35:21 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:02.363 05:35:22 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:02.363 05:35:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.363 05:35:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.363 05:35:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.363 05:35:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.363 05:35:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.363 05:35:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.363 05:35:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.363 05:35:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.363 05:35:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.363 05:35:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.363 05:35:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.363 05:35:22 env -- scripts/common.sh@344 -- # case "$op" in 00:24:02.363 05:35:22 env -- scripts/common.sh@345 -- # : 1 00:24:02.363 05:35:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.364 05:35:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.364 05:35:22 env -- scripts/common.sh@365 -- # decimal 1 00:24:02.364 05:35:22 env -- scripts/common.sh@353 -- # local d=1 00:24:02.364 05:35:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.364 05:35:22 env -- scripts/common.sh@355 -- # echo 1 00:24:02.364 05:35:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.364 05:35:22 env -- scripts/common.sh@366 -- # decimal 2 00:24:02.364 05:35:22 env -- scripts/common.sh@353 -- # local d=2 00:24:02.364 05:35:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.364 05:35:22 env -- scripts/common.sh@355 -- # echo 2 00:24:02.364 05:35:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.364 05:35:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.364 05:35:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.364 05:35:22 env -- scripts/common.sh@368 -- # return 0 00:24:02.364 05:35:22 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.364 05:35:22 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.364 --rc genhtml_branch_coverage=1 00:24:02.364 --rc genhtml_function_coverage=1 00:24:02.364 --rc genhtml_legend=1 00:24:02.364 --rc geninfo_all_blocks=1 00:24:02.364 --rc geninfo_unexecuted_blocks=1 00:24:02.364 00:24:02.364 ' 00:24:02.364 05:35:22 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.364 --rc genhtml_branch_coverage=1 00:24:02.364 --rc genhtml_function_coverage=1 00:24:02.364 --rc genhtml_legend=1 00:24:02.364 --rc geninfo_all_blocks=1 00:24:02.364 --rc geninfo_unexecuted_blocks=1 00:24:02.364 00:24:02.364 ' 00:24:02.364 05:35:22 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.364 --rc genhtml_branch_coverage=1 00:24:02.364 --rc genhtml_function_coverage=1 00:24:02.364 --rc genhtml_legend=1 00:24:02.364 --rc geninfo_all_blocks=1 00:24:02.364 --rc geninfo_unexecuted_blocks=1 00:24:02.364 00:24:02.364 ' 00:24:02.364 05:35:22 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:02.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.364 --rc genhtml_branch_coverage=1 00:24:02.364 --rc genhtml_function_coverage=1 00:24:02.364 --rc genhtml_legend=1 00:24:02.364 --rc geninfo_all_blocks=1 00:24:02.364 --rc geninfo_unexecuted_blocks=1 00:24:02.364 00:24:02.364 ' 00:24:02.364 05:35:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:24:02.364 05:35:22 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:02.364 05:35:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:02.364 05:35:22 env -- common/autotest_common.sh@10 -- # set +x 00:24:02.364 ************************************ 00:24:02.364 START TEST env_memory 00:24:02.364 ************************************ 00:24:02.364 05:35:22 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:24:02.364 00:24:02.364 00:24:02.364 CUnit - A unit testing framework for C - Version 2.1-3 00:24:02.364 http://cunit.sourceforge.net/ 00:24:02.364 00:24:02.364 00:24:02.364 Suite: memory 00:24:02.364 Test: alloc and free memory map ...[2024-11-20 05:35:22.107341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:24:02.364 passed 00:24:02.364 Test: mem map translation ...[2024-11-20 05:35:22.141544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:24:02.364 [2024-11-20 05:35:22.141624] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:24:02.364 [2024-11-20 05:35:22.141688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:24:02.364 [2024-11-20 05:35:22.141707] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:24:02.364 passed 00:24:02.364 Test: mem map registration ...[2024-11-20 05:35:22.206029] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:24:02.364 [2024-11-20 05:35:22.206102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:24:02.364 passed 00:24:02.623 Test: mem map adjacent registrations ...passed 00:24:02.623 00:24:02.623 Run Summary: Type Total Ran Passed Failed Inactive 00:24:02.623 suites 1 1 n/a 0 0 00:24:02.623 tests 4 4 4 0 0 00:24:02.623 asserts 152 152 152 0 n/a 00:24:02.623 00:24:02.623 Elapsed time = 0.218 seconds 00:24:02.623 00:24:02.623 real 0m0.240s 00:24:02.623 user 0m0.218s 00:24:02.623 sys 0m0.018s 00:24:02.623 05:35:22 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:02.623 05:35:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:24:02.623 ************************************ 00:24:02.623 END TEST env_memory 00:24:02.623 ************************************ 00:24:02.623 05:35:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:24:02.623 05:35:22 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:02.623 05:35:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:02.623 05:35:22 env -- common/autotest_common.sh@10 -- # set +x 00:24:02.623 ************************************ 00:24:02.623 START TEST env_vtophys 00:24:02.623 ************************************ 00:24:02.623 05:35:22 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:24:02.623 EAL: lib.eal log level changed from notice to debug 00:24:02.623 EAL: Detected lcore 0 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 1 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 2 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 3 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 4 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 5 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 6 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 7 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 8 as core 0 on socket 0 00:24:02.623 EAL: Detected lcore 9 as core 0 on socket 0 00:24:02.623 EAL: Maximum logical cores by configuration: 128 00:24:02.623 EAL: Detected CPU lcores: 10 00:24:02.623 EAL: Detected NUMA nodes: 1 00:24:02.623 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:24:02.623 EAL: Detected shared linkage of DPDK 00:24:02.623 EAL: No 
shared files mode enabled, IPC will be disabled 00:24:02.623 EAL: Selected IOVA mode 'PA' 00:24:02.624 EAL: Probing VFIO support... 00:24:02.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:24:02.624 EAL: VFIO modules not loaded, skipping VFIO support... 00:24:02.624 EAL: Ask a virtual area of 0x2e000 bytes 00:24:02.624 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:24:02.624 EAL: Setting up physically contiguous memory... 00:24:02.624 EAL: Setting maximum number of open files to 524288 00:24:02.624 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:24:02.624 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:24:02.624 EAL: Ask a virtual area of 0x61000 bytes 00:24:02.624 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:24:02.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:24:02.624 EAL: Ask a virtual area of 0x400000000 bytes 00:24:02.624 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:24:02.624 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:24:02.624 EAL: Ask a virtual area of 0x61000 bytes 00:24:02.624 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:24:02.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:24:02.624 EAL: Ask a virtual area of 0x400000000 bytes 00:24:02.624 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:24:02.624 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:24:02.624 EAL: Ask a virtual area of 0x61000 bytes 00:24:02.624 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:24:02.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:24:02.624 EAL: Ask a virtual area of 0x400000000 bytes 00:24:02.624 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:24:02.624 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:24:02.624 EAL: Ask a virtual area of 0x61000 bytes 00:24:02.624 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:24:02.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:24:02.624 EAL: Ask a virtual area of 0x400000000 bytes 00:24:02.624 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:24:02.624 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:24:02.624 EAL: Hugepages will be freed exactly as allocated. 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: TSC frequency is ~2100000 KHz 00:24:02.624 EAL: Main lcore 0 is ready (tid=7f09b4ee8a00;cpuset=[0]) 00:24:02.624 EAL: Trying to obtain current memory policy. 00:24:02.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.624 EAL: Restoring previous memory policy: 0 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was expanded by 2MB 00:24:02.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:24:02.624 EAL: No PCI address specified using 'addr=' in: bus=pci 00:24:02.624 EAL: Mem event callback 'spdk:(nil)' registered 00:24:02.624 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:24:02.624 00:24:02.624 00:24:02.624 CUnit - A unit testing framework for C - Version 2.1-3 00:24:02.624 http://cunit.sourceforge.net/ 00:24:02.624 00:24:02.624 00:24:02.624 Suite: components_suite 00:24:02.624 Test: vtophys_malloc_test ...passed 00:24:02.624 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:24:02.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.624 EAL: Restoring previous memory policy: 4 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was expanded by 4MB 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was shrunk by 4MB 00:24:02.624 EAL: Trying to obtain current memory policy. 00:24:02.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.624 EAL: Restoring previous memory policy: 4 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was expanded by 6MB 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was shrunk by 6MB 00:24:02.624 EAL: Trying to obtain current memory policy. 00:24:02.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.624 EAL: Restoring previous memory policy: 4 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was expanded by 10MB 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was shrunk by 10MB 00:24:02.624 EAL: Trying to obtain current memory policy. 00:24:02.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.624 EAL: Restoring previous memory policy: 4 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.624 EAL: request: mp_malloc_sync 00:24:02.624 EAL: No shared files mode enabled, IPC is disabled 00:24:02.624 EAL: Heap on socket 0 was expanded by 18MB 00:24:02.624 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was shrunk by 18MB 00:24:02.883 EAL: Trying to obtain current memory policy. 00:24:02.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.883 EAL: Restoring previous memory policy: 4 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was expanded by 34MB 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was shrunk by 34MB 00:24:02.883 EAL: Trying to obtain current memory policy. 
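
The heap sizes in this vtophys pass (4MB through 34MB above, continuing to 1026MB below) follow a 2^k + 2 MB progression, so each round roughly doubles the previous allocation. A one-liner that reproduces the sequence, for reference:

    for ((k = 1; k <= 10; k++)); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
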
00:24:02.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.883 EAL: Restoring previous memory policy: 4 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was expanded by 66MB 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was shrunk by 66MB 00:24:02.883 EAL: Trying to obtain current memory policy. 00:24:02.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.883 EAL: Restoring previous memory policy: 4 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was expanded by 130MB 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was shrunk by 130MB 00:24:02.883 EAL: Trying to obtain current memory policy. 00:24:02.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:02.883 EAL: Restoring previous memory policy: 4 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was expanded by 258MB 00:24:02.883 EAL: Calling mem event callback 'spdk:(nil)' 00:24:02.883 EAL: request: mp_malloc_sync 00:24:02.883 EAL: No shared files mode enabled, IPC is disabled 00:24:02.883 EAL: Heap on socket 0 was shrunk by 258MB 00:24:02.883 EAL: Trying to obtain current memory policy. 00:24:02.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:03.140 EAL: Restoring previous memory policy: 4 00:24:03.140 EAL: Calling mem event callback 'spdk:(nil)' 00:24:03.140 EAL: request: mp_malloc_sync 00:24:03.140 EAL: No shared files mode enabled, IPC is disabled 00:24:03.140 EAL: Heap on socket 0 was expanded by 514MB 00:24:03.140 EAL: Calling mem event callback 'spdk:(nil)' 00:24:03.398 EAL: request: mp_malloc_sync 00:24:03.398 EAL: No shared files mode enabled, IPC is disabled 00:24:03.398 EAL: Heap on socket 0 was shrunk by 514MB 00:24:03.398 EAL: Trying to obtain current memory policy. 
00:24:03.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:24:03.655 EAL: Restoring previous memory policy: 4 00:24:03.655 EAL: Calling mem event callback 'spdk:(nil)' 00:24:03.655 EAL: request: mp_malloc_sync 00:24:03.655 EAL: No shared files mode enabled, IPC is disabled 00:24:03.655 EAL: Heap on socket 0 was expanded by 1026MB 00:24:03.655 EAL: Calling mem event callback 'spdk:(nil)' 00:24:03.915 passed 00:24:03.915 00:24:03.915 Run Summary: Type Total Ran Passed Failed Inactive 00:24:03.915 suites 1 1 n/a 0 0 00:24:03.915 tests 2 2 2 0 0 00:24:03.915 asserts 5421 5421 5421 0 n/a 00:24:03.915 00:24:03.915 Elapsed time = 1.148 seconds 00:24:03.915 EAL: request: mp_malloc_sync 00:24:03.915 EAL: No shared files mode enabled, IPC is disabled 00:24:03.915 EAL: Heap on socket 0 was shrunk by 1026MB 00:24:03.915 EAL: Calling mem event callback 'spdk:(nil)' 00:24:03.915 EAL: request: mp_malloc_sync 00:24:03.915 EAL: No shared files mode enabled, IPC is disabled 00:24:03.915 EAL: Heap on socket 0 was shrunk by 2MB 00:24:03.915 EAL: No shared files mode enabled, IPC is disabled 00:24:03.915 EAL: No shared files mode enabled, IPC is disabled 00:24:03.915 EAL: No shared files mode enabled, IPC is disabled 00:24:03.915 00:24:03.915 real 0m1.368s 00:24:03.915 user 0m0.753s 00:24:03.915 sys 0m0.472s 00:24:03.915 05:35:23 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:03.915 05:35:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:24:03.915 ************************************ 00:24:03.915 END TEST env_vtophys 00:24:03.915 ************************************ 00:24:03.915 05:35:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:24:03.915 05:35:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:03.915 05:35:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:03.915 05:35:23 env -- common/autotest_common.sh@10 -- # set +x 00:24:03.915 ************************************ 00:24:03.915 START TEST env_pci 00:24:03.915 ************************************ 00:24:03.915 05:35:23 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:24:03.915 00:24:03.915 00:24:03.915 CUnit - A unit testing framework for C - Version 2.1-3 00:24:03.915 http://cunit.sourceforge.net/ 00:24:03.915 00:24:03.915 00:24:03.915 Suite: pci 00:24:03.915 Test: pci_hook ...[2024-11-20 05:35:23.804944] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58610 has claimed it 00:24:03.915 passed 00:24:03.915 00:24:03.915 Run Summary: Type Total Ran Passed Failed Inactive 00:24:03.915 suites 1 1 n/a 0 0 00:24:03.915 tests 1 1 1 0 0 00:24:03.915 asserts 25 25 25 0 n/a 00:24:03.915 00:24:03.915 Elapsed time = 0.002 seconds 00:24:03.915 EAL: Cannot find device (10000:00:01.0) 00:24:03.915 EAL: Failed to attach device on primary process 00:24:03.915 00:24:03.915 real 0m0.026s 00:24:03.915 user 0m0.013s 00:24:03.915 sys 0m0.013s 00:24:03.915 05:35:23 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:03.915 05:35:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:24:03.915 ************************************ 00:24:03.915 END TEST env_pci 00:24:03.915 ************************************ 00:24:04.174 05:35:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:24:04.174 05:35:23 env -- env/env.sh@15 -- # uname 00:24:04.174 05:35:23 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:24:04.174 05:35:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:24:04.174 05:35:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:24:04.174 05:35:23 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:04.174 05:35:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:04.174 05:35:23 env -- common/autotest_common.sh@10 -- # set +x 00:24:04.174 ************************************ 00:24:04.174 START TEST env_dpdk_post_init 00:24:04.174 ************************************ 00:24:04.174 05:35:23 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:24:04.174 EAL: Detected CPU lcores: 10 00:24:04.174 EAL: Detected NUMA nodes: 1 00:24:04.174 EAL: Detected shared linkage of DPDK 00:24:04.174 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:24:04.174 EAL: Selected IOVA mode 'PA' 00:24:04.174 TELEMETRY: No legacy callbacks, legacy socket not created 00:24:04.174 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:24:04.174 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:24:04.174 Starting DPDK initialization... 00:24:04.174 Starting SPDK post initialization... 00:24:04.174 SPDK NVMe probe 00:24:04.174 Attaching to 0000:00:10.0 00:24:04.174 Attaching to 0000:00:11.0 00:24:04.174 Attached to 0000:00:10.0 00:24:04.174 Attached to 0000:00:11.0 00:24:04.174 Cleaning up... 00:24:04.174 00:24:04.174 real 0m0.208s 00:24:04.174 user 0m0.059s 00:24:04.174 sys 0m0.050s 00:24:04.174 05:35:24 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:04.174 05:35:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:24:04.174 ************************************ 00:24:04.174 END TEST env_dpdk_post_init 00:24:04.174 ************************************ 00:24:04.432 05:35:24 env -- env/env.sh@26 -- # uname 00:24:04.432 05:35:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:24:04.432 05:35:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:24:04.432 05:35:24 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:04.432 05:35:24 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:04.432 05:35:24 env -- common/autotest_common.sh@10 -- # set +x 00:24:04.432 ************************************ 00:24:04.432 START TEST env_mem_callbacks 00:24:04.432 ************************************ 00:24:04.432 05:35:24 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:24:04.432 EAL: Detected CPU lcores: 10 00:24:04.432 EAL: Detected NUMA nodes: 1 00:24:04.432 EAL: Detected shared linkage of DPDK 00:24:04.432 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:24:04.432 EAL: Selected IOVA mode 'PA' 00:24:04.432 TELEMETRY: No legacy callbacks, legacy socket not created 00:24:04.432 00:24:04.432 00:24:04.432 CUnit - A unit testing framework for C - Version 2.1-3 00:24:04.432 http://cunit.sourceforge.net/ 00:24:04.432 00:24:04.432 00:24:04.432 Suite: memory 00:24:04.432 Test: test ... 
00:24:04.432 register 0x200000200000 2097152
00:24:04.432 malloc 3145728
00:24:04.432 register 0x200000400000 4194304
00:24:04.432 buf 0x200000500000 len 3145728 PASSED
00:24:04.432 malloc 64
00:24:04.432 buf 0x2000004fff40 len 64 PASSED
00:24:04.432 malloc 4194304
00:24:04.432 register 0x200000800000 6291456
00:24:04.432 buf 0x200000a00000 len 4194304 PASSED
00:24:04.433 free 0x200000500000 3145728
00:24:04.433 free 0x2000004fff40 64
00:24:04.433 unregister 0x200000400000 4194304 PASSED
00:24:04.433 free 0x200000a00000 4194304
00:24:04.433 unregister 0x200000800000 6291456 PASSED
00:24:04.433 malloc 8388608
00:24:04.433 register 0x200000400000 10485760
00:24:04.433 buf 0x200000600000 len 8388608 PASSED
00:24:04.433 free 0x200000600000 8388608
00:24:04.433 unregister 0x200000400000 10485760 PASSED
00:24:04.433 passed
00:24:04.433 
00:24:04.433 Run Summary: Type Total Ran Passed Failed Inactive
00:24:04.433 suites 1 1 n/a 0 0
00:24:04.433 tests 1 1 1 0 0
00:24:04.433 asserts 15 15 15 0 n/a
00:24:04.433 
00:24:04.433 Elapsed time = 0.011 seconds
00:24:04.433 
00:24:04.433 real 0m0.157s
00:24:04.433 user 0m0.016s
00:24:04.433 sys 0m0.039s
00:24:04.433 05:35:24 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:04.433 05:35:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:24:04.433 ************************************
00:24:04.433 END TEST env_mem_callbacks
00:24:04.433 ************************************
00:24:04.691 
00:24:04.691 real 0m2.538s
00:24:04.691 user 0m1.278s
00:24:04.691 sys 0m0.924s
00:24:04.691 05:35:24 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:24:04.691 05:35:24 env -- common/autotest_common.sh@10 -- # set +x
00:24:04.691 ************************************
00:24:04.691 END TEST env
00:24:04.691 ************************************
00:24:04.691 05:35:24 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:24:04.691 05:35:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:24:04.691 05:35:24 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:24:04.691 05:35:24 -- common/autotest_common.sh@10 -- # set +x
00:24:04.691 ************************************
00:24:04.691 START TEST rpc
00:24:04.691 ************************************
00:24:04.691 05:35:24 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:24:04.691 * Looking for test storage...
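
Every START TEST / END TEST banner above is printed by the run_test wrapper from autotest_common.sh, which also produces the per-test real/user/sys timing lines. A simplified sketch of the banner-and-run logic (the actual helper additionally handles timing capture and xtrace bookkeeping):

    run_test() {
        local name=$1 rc
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        "$@"            # e.g. the env.sh or rpc.sh suite with its arguments
        rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
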
00:24:04.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:24:04.691 05:35:24 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:04.691 05:35:24 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:04.691 05:35:24 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.951 05:35:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.951 05:35:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.951 05:35:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.951 05:35:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.951 05:35:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.951 05:35:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.951 05:35:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.951 05:35:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:24:04.951 05:35:24 rpc -- scripts/common.sh@345 -- # : 1 00:24:04.951 05:35:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.951 05:35:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.951 05:35:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:24:04.951 05:35:24 rpc -- scripts/common.sh@353 -- # local d=1 00:24:04.951 05:35:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.951 05:35:24 rpc -- scripts/common.sh@355 -- # echo 1 00:24:04.951 05:35:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.951 05:35:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@353 -- # local d=2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.951 05:35:24 rpc -- scripts/common.sh@355 -- # echo 2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.951 05:35:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.951 05:35:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.951 05:35:24 rpc -- scripts/common.sh@368 -- # return 0 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.951 --rc genhtml_branch_coverage=1 00:24:04.951 --rc genhtml_function_coverage=1 00:24:04.951 --rc genhtml_legend=1 00:24:04.951 --rc geninfo_all_blocks=1 00:24:04.951 --rc geninfo_unexecuted_blocks=1 00:24:04.951 00:24:04.951 ' 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.951 --rc genhtml_branch_coverage=1 00:24:04.951 --rc genhtml_function_coverage=1 00:24:04.951 --rc genhtml_legend=1 00:24:04.951 --rc geninfo_all_blocks=1 00:24:04.951 --rc geninfo_unexecuted_blocks=1 00:24:04.951 00:24:04.951 ' 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.951 --rc genhtml_branch_coverage=1 00:24:04.951 --rc genhtml_function_coverage=1 00:24:04.951 --rc 
genhtml_legend=1 00:24:04.951 --rc geninfo_all_blocks=1 00:24:04.951 --rc geninfo_unexecuted_blocks=1 00:24:04.951 00:24:04.951 ' 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.951 --rc genhtml_branch_coverage=1 00:24:04.951 --rc genhtml_function_coverage=1 00:24:04.951 --rc genhtml_legend=1 00:24:04.951 --rc geninfo_all_blocks=1 00:24:04.951 --rc geninfo_unexecuted_blocks=1 00:24:04.951 00:24:04.951 ' 00:24:04.951 05:35:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58733 00:24:04.951 05:35:24 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:24:04.951 05:35:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:24:04.951 05:35:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58733 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@833 -- # '[' -z 58733 ']' 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:04.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:04.951 05:35:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:24:04.951 [2024-11-20 05:35:24.699244] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:04.951 [2024-11-20 05:35:24.699356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58733 ] 00:24:04.951 [2024-11-20 05:35:24.853901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.210 [2024-11-20 05:35:24.910378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:24:05.210 [2024-11-20 05:35:24.910434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58733' to capture a snapshot of events at runtime. 00:24:05.210 [2024-11-20 05:35:24.910472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.210 [2024-11-20 05:35:24.910493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.210 [2024-11-20 05:35:24.910501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58733 for offline analysis/debug. 
00:24:05.210 [2024-11-20 05:35:24.910873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.468 05:35:25 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:05.468 05:35:25 rpc -- common/autotest_common.sh@866 -- # return 0 00:24:05.468 05:35:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:24:05.468 05:35:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:24:05.468 05:35:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:24:05.468 05:35:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:24:05.468 05:35:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:05.468 05:35:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:05.468 05:35:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.468 ************************************ 00:24:05.468 START TEST rpc_integrity 00:24:05.468 ************************************ 00:24:05.468 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:24:05.468 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:05.468 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:24:05.469 { 00:24:05.469 "aliases": [ 00:24:05.469 "5a269477-08ed-4650-a263-a3635d845777" 00:24:05.469 ], 00:24:05.469 "assigned_rate_limits": { 00:24:05.469 "r_mbytes_per_sec": 0, 00:24:05.469 "rw_ios_per_sec": 0, 00:24:05.469 "rw_mbytes_per_sec": 0, 00:24:05.469 "w_mbytes_per_sec": 0 00:24:05.469 }, 00:24:05.469 "block_size": 512, 00:24:05.469 "claimed": false, 00:24:05.469 "driver_specific": {}, 00:24:05.469 "memory_domains": [ 00:24:05.469 { 00:24:05.469 "dma_device_id": "system", 00:24:05.469 "dma_device_type": 1 00:24:05.469 }, 00:24:05.469 { 00:24:05.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.469 "dma_device_type": 2 00:24:05.469 } 00:24:05.469 ], 00:24:05.469 "name": "Malloc0", 
00:24:05.469 "num_blocks": 16384, 00:24:05.469 "product_name": "Malloc disk", 00:24:05.469 "supported_io_types": { 00:24:05.469 "abort": true, 00:24:05.469 "compare": false, 00:24:05.469 "compare_and_write": false, 00:24:05.469 "copy": true, 00:24:05.469 "flush": true, 00:24:05.469 "get_zone_info": false, 00:24:05.469 "nvme_admin": false, 00:24:05.469 "nvme_io": false, 00:24:05.469 "nvme_io_md": false, 00:24:05.469 "nvme_iov_md": false, 00:24:05.469 "read": true, 00:24:05.469 "reset": true, 00:24:05.469 "seek_data": false, 00:24:05.469 "seek_hole": false, 00:24:05.469 "unmap": true, 00:24:05.469 "write": true, 00:24:05.469 "write_zeroes": true, 00:24:05.469 "zcopy": true, 00:24:05.469 "zone_append": false, 00:24:05.469 "zone_management": false 00:24:05.469 }, 00:24:05.469 "uuid": "5a269477-08ed-4650-a263-a3635d845777", 00:24:05.469 "zoned": false 00:24:05.469 } 00:24:05.469 ]' 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.469 [2024-11-20 05:35:25.331353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:24:05.469 [2024-11-20 05:35:25.331403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:05.469 [2024-11-20 05:35:25.331421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11b5c20 00:24:05.469 [2024-11-20 05:35:25.331431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:05.469 [2024-11-20 05:35:25.333088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:05.469 [2024-11-20 05:35:25.333130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:24:05.469 Passthru0 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.469 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.469 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:24:05.469 { 00:24:05.469 "aliases": [ 00:24:05.469 "5a269477-08ed-4650-a263-a3635d845777" 00:24:05.469 ], 00:24:05.469 "assigned_rate_limits": { 00:24:05.469 "r_mbytes_per_sec": 0, 00:24:05.469 "rw_ios_per_sec": 0, 00:24:05.469 "rw_mbytes_per_sec": 0, 00:24:05.469 "w_mbytes_per_sec": 0 00:24:05.469 }, 00:24:05.469 "block_size": 512, 00:24:05.469 "claim_type": "exclusive_write", 00:24:05.469 "claimed": true, 00:24:05.469 "driver_specific": {}, 00:24:05.469 "memory_domains": [ 00:24:05.469 { 00:24:05.469 "dma_device_id": "system", 00:24:05.469 "dma_device_type": 1 00:24:05.469 }, 00:24:05.469 { 00:24:05.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.469 "dma_device_type": 2 00:24:05.469 } 00:24:05.469 ], 00:24:05.469 "name": "Malloc0", 00:24:05.469 "num_blocks": 16384, 00:24:05.469 "product_name": "Malloc disk", 00:24:05.469 "supported_io_types": { 00:24:05.469 "abort": true, 00:24:05.469 "compare": false, 00:24:05.469 
"compare_and_write": false, 00:24:05.469 "copy": true, 00:24:05.469 "flush": true, 00:24:05.469 "get_zone_info": false, 00:24:05.469 "nvme_admin": false, 00:24:05.469 "nvme_io": false, 00:24:05.469 "nvme_io_md": false, 00:24:05.469 "nvme_iov_md": false, 00:24:05.469 "read": true, 00:24:05.469 "reset": true, 00:24:05.469 "seek_data": false, 00:24:05.469 "seek_hole": false, 00:24:05.469 "unmap": true, 00:24:05.469 "write": true, 00:24:05.469 "write_zeroes": true, 00:24:05.469 "zcopy": true, 00:24:05.469 "zone_append": false, 00:24:05.469 "zone_management": false 00:24:05.469 }, 00:24:05.469 "uuid": "5a269477-08ed-4650-a263-a3635d845777", 00:24:05.469 "zoned": false 00:24:05.469 }, 00:24:05.469 { 00:24:05.469 "aliases": [ 00:24:05.469 "0e037eef-1ffc-5784-9a95-a25ff5688b6d" 00:24:05.469 ], 00:24:05.470 "assigned_rate_limits": { 00:24:05.470 "r_mbytes_per_sec": 0, 00:24:05.470 "rw_ios_per_sec": 0, 00:24:05.470 "rw_mbytes_per_sec": 0, 00:24:05.470 "w_mbytes_per_sec": 0 00:24:05.470 }, 00:24:05.470 "block_size": 512, 00:24:05.470 "claimed": false, 00:24:05.470 "driver_specific": { 00:24:05.470 "passthru": { 00:24:05.470 "base_bdev_name": "Malloc0", 00:24:05.470 "name": "Passthru0" 00:24:05.470 } 00:24:05.470 }, 00:24:05.470 "memory_domains": [ 00:24:05.470 { 00:24:05.470 "dma_device_id": "system", 00:24:05.470 "dma_device_type": 1 00:24:05.470 }, 00:24:05.470 { 00:24:05.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.470 "dma_device_type": 2 00:24:05.470 } 00:24:05.470 ], 00:24:05.470 "name": "Passthru0", 00:24:05.470 "num_blocks": 16384, 00:24:05.470 "product_name": "passthru", 00:24:05.470 "supported_io_types": { 00:24:05.470 "abort": true, 00:24:05.470 "compare": false, 00:24:05.470 "compare_and_write": false, 00:24:05.470 "copy": true, 00:24:05.470 "flush": true, 00:24:05.470 "get_zone_info": false, 00:24:05.470 "nvme_admin": false, 00:24:05.470 "nvme_io": false, 00:24:05.470 "nvme_io_md": false, 00:24:05.470 "nvme_iov_md": false, 00:24:05.470 "read": true, 00:24:05.470 "reset": true, 00:24:05.470 "seek_data": false, 00:24:05.470 "seek_hole": false, 00:24:05.470 "unmap": true, 00:24:05.470 "write": true, 00:24:05.470 "write_zeroes": true, 00:24:05.470 "zcopy": true, 00:24:05.470 "zone_append": false, 00:24:05.470 "zone_management": false 00:24:05.470 }, 00:24:05.470 "uuid": "0e037eef-1ffc-5784-9a95-a25ff5688b6d", 00:24:05.470 "zoned": false 00:24:05.470 } 00:24:05.470 ]' 00:24:05.470 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:24:05.729 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:24:05.730 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.730 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.730 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.730 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:24:05.730 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:24:05.730 05:35:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:24:05.730 00:24:05.730 real 0m0.338s 00:24:05.730 user 0m0.210s 00:24:05.730 sys 0m0.054s 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:05.730 05:35:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 ************************************ 00:24:05.730 END TEST rpc_integrity 00:24:05.730 ************************************ 00:24:05.730 05:35:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:24:05.730 05:35:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:05.730 05:35:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:05.730 05:35:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 ************************************ 00:24:05.730 START TEST rpc_plugins 00:24:05.730 ************************************ 00:24:05.730 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:24:05.730 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:24:05.730 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.730 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.730 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:24:05.730 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:24:05.730 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.730 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:24:05.730 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.730 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:24:05.730 { 00:24:05.730 "aliases": [ 00:24:05.730 "2169d439-9c23-41a2-842a-896381031a6b" 00:24:05.730 ], 00:24:05.730 "assigned_rate_limits": { 00:24:05.730 "r_mbytes_per_sec": 0, 00:24:05.730 "rw_ios_per_sec": 0, 00:24:05.730 "rw_mbytes_per_sec": 0, 00:24:05.730 "w_mbytes_per_sec": 0 00:24:05.730 }, 00:24:05.730 "block_size": 4096, 00:24:05.730 "claimed": false, 00:24:05.730 "driver_specific": {}, 00:24:05.730 "memory_domains": [ 00:24:05.730 { 00:24:05.730 "dma_device_id": "system", 00:24:05.730 "dma_device_type": 1 00:24:05.730 }, 00:24:05.730 { 00:24:05.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.730 "dma_device_type": 2 00:24:05.730 } 00:24:05.730 ], 00:24:05.730 "name": "Malloc1", 00:24:05.730 "num_blocks": 256, 00:24:05.730 "product_name": "Malloc disk", 00:24:05.730 "supported_io_types": { 00:24:05.730 "abort": true, 00:24:05.730 "compare": false, 00:24:05.730 "compare_and_write": false, 00:24:05.730 "copy": true, 00:24:05.730 "flush": true, 00:24:05.730 "get_zone_info": false, 00:24:05.730 "nvme_admin": false, 00:24:05.730 "nvme_io": false, 00:24:05.730 "nvme_io_md": false, 00:24:05.730 "nvme_iov_md": false, 00:24:05.730 "read": true, 00:24:05.730 "reset": true, 00:24:05.730 "seek_data": false, 00:24:05.730 "seek_hole": false, 00:24:05.730 "unmap": true, 00:24:05.730 "write": true, 00:24:05.730 "write_zeroes": true, 00:24:05.730 "zcopy": true, 00:24:05.730 "zone_append": false, 
00:24:05.730 "zone_management": false 00:24:05.730 }, 00:24:05.730 "uuid": "2169d439-9c23-41a2-842a-896381031a6b", 00:24:05.730 "zoned": false 00:24:05.730 } 00:24:05.730 ]' 00:24:05.730 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:24:05.990 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:24:05.990 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.990 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.990 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:24:05.990 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:24:05.990 05:35:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:24:05.990 00:24:05.990 real 0m0.155s 00:24:05.990 user 0m0.086s 00:24:05.990 sys 0m0.031s 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:05.990 05:35:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:24:05.990 ************************************ 00:24:05.990 END TEST rpc_plugins 00:24:05.990 ************************************ 00:24:05.990 05:35:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:24:05.990 05:35:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:05.990 05:35:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:05.990 05:35:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:24:05.990 ************************************ 00:24:05.990 START TEST rpc_trace_cmd_test 00:24:05.990 ************************************ 00:24:05.990 05:35:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:24:05.990 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:24:05.990 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:24:05.990 05:35:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.990 05:35:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.990 05:35:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.990 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:24:05.990 "bdev": { 00:24:05.990 "mask": "0x8", 00:24:05.990 "tpoint_mask": "0xffffffffffffffff" 00:24:05.990 }, 00:24:05.990 "bdev_nvme": { 00:24:05.990 "mask": "0x4000", 00:24:05.990 "tpoint_mask": "0x0" 00:24:05.990 }, 00:24:05.990 "bdev_raid": { 00:24:05.990 "mask": "0x20000", 00:24:05.990 "tpoint_mask": "0x0" 00:24:05.990 }, 00:24:05.990 "blob": { 00:24:05.990 "mask": "0x10000", 00:24:05.990 "tpoint_mask": "0x0" 00:24:05.990 }, 00:24:05.990 "blobfs": { 00:24:05.990 "mask": "0x80", 00:24:05.990 "tpoint_mask": "0x0" 00:24:05.990 }, 00:24:05.990 "dsa": { 00:24:05.990 "mask": "0x200", 00:24:05.990 "tpoint_mask": "0x0" 00:24:05.990 }, 00:24:05.991 "ftl": { 00:24:05.991 "mask": "0x40", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "iaa": { 00:24:05.991 "mask": "0x1000", 
00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "iscsi_conn": { 00:24:05.991 "mask": "0x2", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "nvme_pcie": { 00:24:05.991 "mask": "0x800", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "nvme_tcp": { 00:24:05.991 "mask": "0x2000", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "nvmf_rdma": { 00:24:05.991 "mask": "0x10", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "nvmf_tcp": { 00:24:05.991 "mask": "0x20", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "scheduler": { 00:24:05.991 "mask": "0x40000", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "scsi": { 00:24:05.991 "mask": "0x4", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "sock": { 00:24:05.991 "mask": "0x8000", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "thread": { 00:24:05.991 "mask": "0x400", 00:24:05.991 "tpoint_mask": "0x0" 00:24:05.991 }, 00:24:05.991 "tpoint_group_mask": "0x8", 00:24:05.991 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58733" 00:24:05.991 }' 00:24:05.991 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:24:05.991 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:24:05.991 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:24:05.991 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:24:05.991 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:24:06.251 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:24:06.251 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:24:06.251 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:24:06.251 05:35:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:24:06.251 05:35:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:24:06.251 00:24:06.251 real 0m0.240s 00:24:06.251 user 0m0.187s 00:24:06.251 sys 0m0.042s 00:24:06.251 ************************************ 00:24:06.251 END TEST rpc_trace_cmd_test 00:24:06.251 05:35:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:06.251 05:35:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.251 ************************************ 00:24:06.251 05:35:26 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:24:06.251 05:35:26 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:24:06.252 05:35:26 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:06.252 05:35:26 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:06.252 05:35:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:24:06.252 ************************************ 00:24:06.252 START TEST go_rpc 00:24:06.252 ************************************ 00:24:06.252 05:35:26 rpc.go_rpc -- common/autotest_common.sh@1127 -- # go_rpc 00:24:06.252 05:35:26 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:24:06.252 05:35:26 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:24:06.252 05:35:26 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:24:06.252 05:35:26 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:24:06.252 05:35:26 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:24:06.252 05:35:26 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.252 05:35:26 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:24:06.252 05:35:26 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.252 05:35:26 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:24:06.252 05:35:26 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["39eb6b28-a576-4bbd-8a50-7ae43556ecdc"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"39eb6b28-a576-4bbd-8a50-7ae43556ecdc","zoned":false}]' 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:06.512 05:35:26 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.512 05:35:26 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:06.512 05:35:26 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:24:06.512 05:35:26 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:24:06.512 00:24:06.512 real 0m0.221s 00:24:06.512 user 0m0.144s 00:24:06.512 sys 0m0.046s 00:24:06.512 05:35:26 rpc.go_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:06.512 05:35:26 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:06.512 ************************************ 00:24:06.512 END TEST go_rpc 00:24:06.512 ************************************ 00:24:06.512 05:35:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:24:06.512 05:35:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:24:06.512 05:35:26 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:06.512 05:35:26 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:06.512 05:35:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:24:06.512 ************************************ 00:24:06.512 START TEST rpc_daemon_integrity 00:24:06.512 ************************************ 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:24:06.512 
05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.512 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.770 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:24:06.771 { 00:24:06.771 "aliases": [ 00:24:06.771 "57576b39-d5f2-4268-861f-baf359f468c1" 00:24:06.771 ], 00:24:06.771 "assigned_rate_limits": { 00:24:06.771 "r_mbytes_per_sec": 0, 00:24:06.771 "rw_ios_per_sec": 0, 00:24:06.771 "rw_mbytes_per_sec": 0, 00:24:06.771 "w_mbytes_per_sec": 0 00:24:06.771 }, 00:24:06.771 "block_size": 512, 00:24:06.771 "claimed": false, 00:24:06.771 "driver_specific": {}, 00:24:06.771 "memory_domains": [ 00:24:06.771 { 00:24:06.771 "dma_device_id": "system", 00:24:06.771 "dma_device_type": 1 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.771 "dma_device_type": 2 00:24:06.771 } 00:24:06.771 ], 00:24:06.771 "name": "Malloc3", 00:24:06.771 "num_blocks": 16384, 00:24:06.771 "product_name": "Malloc disk", 00:24:06.771 "supported_io_types": { 00:24:06.771 "abort": true, 00:24:06.771 "compare": false, 00:24:06.771 "compare_and_write": false, 00:24:06.771 "copy": true, 00:24:06.771 "flush": true, 00:24:06.771 "get_zone_info": false, 00:24:06.771 "nvme_admin": false, 00:24:06.771 "nvme_io": false, 00:24:06.771 "nvme_io_md": false, 00:24:06.771 "nvme_iov_md": false, 00:24:06.771 "read": true, 00:24:06.771 "reset": true, 00:24:06.771 "seek_data": false, 00:24:06.771 "seek_hole": false, 00:24:06.771 "unmap": true, 00:24:06.771 "write": true, 00:24:06.771 "write_zeroes": true, 00:24:06.771 "zcopy": true, 00:24:06.771 "zone_append": false, 00:24:06.771 "zone_management": false 00:24:06.771 }, 00:24:06.771 "uuid": "57576b39-d5f2-4268-861f-baf359f468c1", 00:24:06.771 "zoned": false 00:24:06.771 } 00:24:06.771 ]' 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.771 [2024-11-20 05:35:26.485230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:24:06.771 [2024-11-20 05:35:26.485295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.771 [2024-11-20 05:35:26.485315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x129fa40 00:24:06.771 [2024-11-20 05:35:26.485326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:24:06.771 [2024-11-20 05:35:26.486851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.771 [2024-11-20 05:35:26.486886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:24:06.771 Passthru0 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.771 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:24:06.771 { 00:24:06.771 "aliases": [ 00:24:06.771 "57576b39-d5f2-4268-861f-baf359f468c1" 00:24:06.771 ], 00:24:06.771 "assigned_rate_limits": { 00:24:06.771 "r_mbytes_per_sec": 0, 00:24:06.771 "rw_ios_per_sec": 0, 00:24:06.771 "rw_mbytes_per_sec": 0, 00:24:06.771 "w_mbytes_per_sec": 0 00:24:06.771 }, 00:24:06.771 "block_size": 512, 00:24:06.771 "claim_type": "exclusive_write", 00:24:06.771 "claimed": true, 00:24:06.771 "driver_specific": {}, 00:24:06.771 "memory_domains": [ 00:24:06.771 { 00:24:06.771 "dma_device_id": "system", 00:24:06.771 "dma_device_type": 1 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.771 "dma_device_type": 2 00:24:06.771 } 00:24:06.771 ], 00:24:06.771 "name": "Malloc3", 00:24:06.771 "num_blocks": 16384, 00:24:06.771 "product_name": "Malloc disk", 00:24:06.771 "supported_io_types": { 00:24:06.771 "abort": true, 00:24:06.771 "compare": false, 00:24:06.771 "compare_and_write": false, 00:24:06.771 "copy": true, 00:24:06.771 "flush": true, 00:24:06.771 "get_zone_info": false, 00:24:06.771 "nvme_admin": false, 00:24:06.771 "nvme_io": false, 00:24:06.771 "nvme_io_md": false, 00:24:06.771 "nvme_iov_md": false, 00:24:06.771 "read": true, 00:24:06.771 "reset": true, 00:24:06.771 "seek_data": false, 00:24:06.771 "seek_hole": false, 00:24:06.771 "unmap": true, 00:24:06.771 "write": true, 00:24:06.771 "write_zeroes": true, 00:24:06.771 "zcopy": true, 00:24:06.771 "zone_append": false, 00:24:06.771 "zone_management": false 00:24:06.771 }, 00:24:06.771 "uuid": "57576b39-d5f2-4268-861f-baf359f468c1", 00:24:06.771 "zoned": false 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "aliases": [ 00:24:06.772 "f8a20f55-e426-5e11-8cea-23d7fb85a754" 00:24:06.772 ], 00:24:06.772 "assigned_rate_limits": { 00:24:06.772 "r_mbytes_per_sec": 0, 00:24:06.772 "rw_ios_per_sec": 0, 00:24:06.772 "rw_mbytes_per_sec": 0, 00:24:06.772 "w_mbytes_per_sec": 0 00:24:06.772 }, 00:24:06.772 "block_size": 512, 00:24:06.772 "claimed": false, 00:24:06.772 "driver_specific": { 00:24:06.772 "passthru": { 00:24:06.772 "base_bdev_name": "Malloc3", 00:24:06.772 "name": "Passthru0" 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 "memory_domains": [ 00:24:06.772 { 00:24:06.772 "dma_device_id": "system", 00:24:06.772 "dma_device_type": 1 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.772 "dma_device_type": 2 00:24:06.772 } 00:24:06.772 ], 00:24:06.772 "name": "Passthru0", 00:24:06.772 "num_blocks": 16384, 00:24:06.772 "product_name": "passthru", 00:24:06.772 "supported_io_types": { 00:24:06.772 "abort": true, 00:24:06.772 "compare": false, 00:24:06.772 "compare_and_write": false, 00:24:06.772 "copy": true, 
00:24:06.772 "flush": true, 00:24:06.772 "get_zone_info": false, 00:24:06.772 "nvme_admin": false, 00:24:06.772 "nvme_io": false, 00:24:06.772 "nvme_io_md": false, 00:24:06.772 "nvme_iov_md": false, 00:24:06.772 "read": true, 00:24:06.772 "reset": true, 00:24:06.772 "seek_data": false, 00:24:06.772 "seek_hole": false, 00:24:06.772 "unmap": true, 00:24:06.772 "write": true, 00:24:06.772 "write_zeroes": true, 00:24:06.772 "zcopy": true, 00:24:06.772 "zone_append": false, 00:24:06.772 "zone_management": false 00:24:06.772 }, 00:24:06.772 "uuid": "f8a20f55-e426-5e11-8cea-23d7fb85a754", 00:24:06.772 "zoned": false 00:24:06.772 } 00:24:06.772 ]' 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:24:06.772 00:24:06.772 real 0m0.287s 00:24:06.772 user 0m0.171s 00:24:06.772 sys 0m0.052s 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:06.772 05:35:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:24:06.772 ************************************ 00:24:06.772 END TEST rpc_daemon_integrity 00:24:06.772 ************************************ 00:24:07.030 05:35:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:07.030 05:35:26 rpc -- rpc/rpc.sh@84 -- # killprocess 58733 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@952 -- # '[' -z 58733 ']' 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@956 -- # kill -0 58733 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@957 -- # uname 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58733 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:07.030 killing process with pid 58733 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58733' 00:24:07.030 05:35:26 rpc -- 
common/autotest_common.sh@971 -- # kill 58733 00:24:07.030 05:35:26 rpc -- common/autotest_common.sh@976 -- # wait 58733 00:24:07.596 00:24:07.596 real 0m2.888s 00:24:07.596 user 0m3.619s 00:24:07.596 sys 0m0.823s 00:24:07.596 05:35:27 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.596 05:35:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:24:07.596 ************************************ 00:24:07.596 END TEST rpc 00:24:07.596 ************************************ 00:24:07.596 05:35:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:24:07.596 05:35:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:07.596 05:35:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:07.596 05:35:27 -- common/autotest_common.sh@10 -- # set +x 00:24:07.596 ************************************ 00:24:07.596 START TEST skip_rpc 00:24:07.596 ************************************ 00:24:07.596 05:35:27 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:24:07.596 * Looking for test storage... 00:24:07.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:24:07.596 05:35:27 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:07.596 05:35:27 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:07.596 05:35:27 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:07.855 05:35:27 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:07.855 05:35:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.855 05:35:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.856 05:35:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:07.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.856 --rc genhtml_branch_coverage=1 00:24:07.856 --rc genhtml_function_coverage=1 00:24:07.856 --rc genhtml_legend=1 00:24:07.856 --rc geninfo_all_blocks=1 00:24:07.856 --rc geninfo_unexecuted_blocks=1 00:24:07.856 00:24:07.856 ' 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:07.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.856 --rc genhtml_branch_coverage=1 00:24:07.856 --rc genhtml_function_coverage=1 00:24:07.856 --rc genhtml_legend=1 00:24:07.856 --rc geninfo_all_blocks=1 00:24:07.856 --rc geninfo_unexecuted_blocks=1 00:24:07.856 00:24:07.856 ' 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:07.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.856 --rc genhtml_branch_coverage=1 00:24:07.856 --rc genhtml_function_coverage=1 00:24:07.856 --rc genhtml_legend=1 00:24:07.856 --rc geninfo_all_blocks=1 00:24:07.856 --rc geninfo_unexecuted_blocks=1 00:24:07.856 00:24:07.856 ' 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:07.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.856 --rc genhtml_branch_coverage=1 00:24:07.856 --rc genhtml_function_coverage=1 00:24:07.856 --rc genhtml_legend=1 00:24:07.856 --rc geninfo_all_blocks=1 00:24:07.856 --rc geninfo_unexecuted_blocks=1 00:24:07.856 00:24:07.856 ' 00:24:07.856 05:35:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:24:07.856 05:35:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:24:07.856 05:35:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:07.856 05:35:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:07.856 ************************************ 00:24:07.856 START TEST skip_rpc 00:24:07.856 ************************************ 00:24:07.856 05:35:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:24:07.856 05:35:27 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58989 00:24:07.856 05:35:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:24:07.856 05:35:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:24:07.856 05:35:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:24:07.856 [2024-11-20 05:35:27.652250] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:07.856 [2024-11-20 05:35:27.652369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58989 ] 00:24:08.114 [2024-11-20 05:35:27.810951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.114 [2024-11-20 05:35:27.906752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:13.377 2024/11/20 05:35:32 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58989 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58989 ']' 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58989 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58989 00:24:13.377 killing process with pid 58989 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.377 05:35:32 skip_rpc.skip_rpc 
-- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58989' 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58989 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58989 00:24:13.377 00:24:13.377 real 0m5.397s 00:24:13.377 user 0m4.875s 00:24:13.377 sys 0m0.435s 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:13.377 ************************************ 00:24:13.377 05:35:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:13.377 END TEST skip_rpc 00:24:13.377 ************************************ 00:24:13.377 05:35:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:24:13.377 05:35:33 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:13.377 05:35:33 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:13.377 05:35:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:13.377 ************************************ 00:24:13.377 START TEST skip_rpc_with_json 00:24:13.377 ************************************ 00:24:13.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.377 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59081 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59081 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 59081 ']' 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:13.378 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:24:13.378 [2024-11-20 05:35:33.094334] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
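[editorial note] The gen_json_config step starting here boots a regular spdk_tgt, provokes the nvmf_get_transports error once (no transport exists yet), creates the TCP transport over RPC, and snapshots the live configuration with save_config; the test then replays that file into a second, RPC-less target, as seen at skip_rpc.sh@46 below. Condensed into the equivalent manual sequence (a sketch, assuming scripts/rpc.py from this repo and paths relative to the repo root):
+ build/bin/spdk_tgt -m 0x1 &
# wait for /var/tmp/spdk.sock before issuing RPCs, as waitforlisten does
+ scripts/rpc.py nvmf_create_transport -t tcp
+ scripts/rpc.py save_config > test/rpc/config.json
# replay the saved config without an RPC server:
+ build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json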
00:24:13.378 [2024-11-20 05:35:33.094431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:24:13.378 [2024-11-20 05:35:33.233616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.378 [2024-11-20 05:35:33.291707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:24:13.636 [2024-11-20 05:35:33.543193] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:24:13.636 2024/11/20 05:35:33 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:24:13.636 request: 00:24:13.636 { 00:24:13.636 "method": "nvmf_get_transports", 00:24:13.636 "params": { 00:24:13.636 "trtype": "tcp" 00:24:13.636 } 00:24:13.636 } 00:24:13.636 Got JSON-RPC error response 00:24:13.636 GoRPCClient: error on JSON-RPC call 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.636 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:24:13.895 [2024-11-20 05:35:33.555311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.895 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.895 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:24:13.895 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.895 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:24:13.895 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.895 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:24:13.895 { 00:24:13.895 "subsystems": [ 00:24:13.895 { 00:24:13.895 "subsystem": "fsdev", 00:24:13.895 "config": [ 00:24:13.895 { 00:24:13.895 "method": "fsdev_set_opts", 00:24:13.895 "params": { 00:24:13.895 "fsdev_io_cache_size": 256, 00:24:13.895 "fsdev_io_pool_size": 65535 00:24:13.895 } 00:24:13.895 } 00:24:13.895 ] 00:24:13.895 }, 00:24:13.895 { 00:24:13.895 "subsystem": "keyring", 00:24:13.895 "config": [] 00:24:13.895 }, 00:24:13.895 { 00:24:13.895 "subsystem": "iobuf", 00:24:13.895 "config": [ 00:24:13.895 { 00:24:13.895 "method": "iobuf_set_options", 00:24:13.895 "params": { 00:24:13.895 "enable_numa": false, 00:24:13.895 "large_bufsize": 135168, 00:24:13.895 "large_pool_count": 1024, 00:24:13.895 "small_bufsize": 8192, 00:24:13.895 "small_pool_count": 8192 00:24:13.895 } 
00:24:13.895 } 00:24:13.895 ] 00:24:13.895 }, 00:24:13.895 { 00:24:13.895 "subsystem": "sock", 00:24:13.895 "config": [ 00:24:13.895 { 00:24:13.895 "method": "sock_set_default_impl", 00:24:13.895 "params": { 00:24:13.895 "impl_name": "posix" 00:24:13.895 } 00:24:13.895 }, 00:24:13.895 { 00:24:13.895 "method": "sock_impl_set_options", 00:24:13.895 "params": { 00:24:13.895 "enable_ktls": false, 00:24:13.895 "enable_placement_id": 0, 00:24:13.895 "enable_quickack": false, 00:24:13.895 "enable_recv_pipe": true, 00:24:13.895 "enable_zerocopy_send_client": false, 00:24:13.895 "enable_zerocopy_send_server": true, 00:24:13.895 "impl_name": "ssl", 00:24:13.895 "recv_buf_size": 4096, 00:24:13.895 "send_buf_size": 4096, 00:24:13.895 "tls_version": 0, 00:24:13.895 "zerocopy_threshold": 0 00:24:13.895 } 00:24:13.895 }, 00:24:13.895 { 00:24:13.895 "method": "sock_impl_set_options", 00:24:13.895 "params": { 00:24:13.895 "enable_ktls": false, 00:24:13.895 "enable_placement_id": 0, 00:24:13.895 "enable_quickack": false, 00:24:13.895 "enable_recv_pipe": true, 00:24:13.895 "enable_zerocopy_send_client": false, 00:24:13.895 "enable_zerocopy_send_server": true, 00:24:13.895 "impl_name": "posix", 00:24:13.895 "recv_buf_size": 2097152, 00:24:13.895 "send_buf_size": 2097152, 00:24:13.895 "tls_version": 0, 00:24:13.895 "zerocopy_threshold": 0 00:24:13.895 } 00:24:13.895 } 00:24:13.895 ] 00:24:13.895 }, 00:24:13.895 { 00:24:13.895 "subsystem": "vmd", 00:24:13.895 "config": [] 00:24:13.895 }, 00:24:13.895 { 00:24:13.895 "subsystem": "accel", 00:24:13.895 "config": [ 00:24:13.895 { 00:24:13.895 "method": "accel_set_options", 00:24:13.895 "params": { 00:24:13.895 "buf_count": 2048, 00:24:13.895 "large_cache_size": 16, 00:24:13.895 "sequence_count": 2048, 00:24:13.895 "small_cache_size": 128, 00:24:13.896 "task_count": 2048 00:24:13.896 } 00:24:13.896 } 00:24:13.896 ] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "bdev", 00:24:13.896 "config": [ 00:24:13.896 { 00:24:13.896 "method": "bdev_set_options", 00:24:13.896 "params": { 00:24:13.896 "bdev_auto_examine": true, 00:24:13.896 "bdev_io_cache_size": 256, 00:24:13.896 "bdev_io_pool_size": 65535, 00:24:13.896 "iobuf_large_cache_size": 16, 00:24:13.896 "iobuf_small_cache_size": 128 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "bdev_raid_set_options", 00:24:13.896 "params": { 00:24:13.896 "process_max_bandwidth_mb_sec": 0, 00:24:13.896 "process_window_size_kb": 1024 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "bdev_iscsi_set_options", 00:24:13.896 "params": { 00:24:13.896 "timeout_sec": 30 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "bdev_nvme_set_options", 00:24:13.896 "params": { 00:24:13.896 "action_on_timeout": "none", 00:24:13.896 "allow_accel_sequence": false, 00:24:13.896 "arbitration_burst": 0, 00:24:13.896 "bdev_retry_count": 3, 00:24:13.896 "ctrlr_loss_timeout_sec": 0, 00:24:13.896 "delay_cmd_submit": true, 00:24:13.896 "dhchap_dhgroups": [ 00:24:13.896 "null", 00:24:13.896 "ffdhe2048", 00:24:13.896 "ffdhe3072", 00:24:13.896 "ffdhe4096", 00:24:13.896 "ffdhe6144", 00:24:13.896 "ffdhe8192" 00:24:13.896 ], 00:24:13.896 "dhchap_digests": [ 00:24:13.896 "sha256", 00:24:13.896 "sha384", 00:24:13.896 "sha512" 00:24:13.896 ], 00:24:13.896 "disable_auto_failback": false, 00:24:13.896 "fast_io_fail_timeout_sec": 0, 00:24:13.896 "generate_uuids": false, 00:24:13.896 "high_priority_weight": 0, 00:24:13.896 "io_path_stat": false, 00:24:13.896 "io_queue_requests": 0, 00:24:13.896 
"keep_alive_timeout_ms": 10000, 00:24:13.896 "low_priority_weight": 0, 00:24:13.896 "medium_priority_weight": 0, 00:24:13.896 "nvme_adminq_poll_period_us": 10000, 00:24:13.896 "nvme_error_stat": false, 00:24:13.896 "nvme_ioq_poll_period_us": 0, 00:24:13.896 "rdma_cm_event_timeout_ms": 0, 00:24:13.896 "rdma_max_cq_size": 0, 00:24:13.896 "rdma_srq_size": 0, 00:24:13.896 "reconnect_delay_sec": 0, 00:24:13.896 "timeout_admin_us": 0, 00:24:13.896 "timeout_us": 0, 00:24:13.896 "transport_ack_timeout": 0, 00:24:13.896 "transport_retry_count": 4, 00:24:13.896 "transport_tos": 0 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "bdev_nvme_set_hotplug", 00:24:13.896 "params": { 00:24:13.896 "enable": false, 00:24:13.896 "period_us": 100000 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "bdev_wait_for_examine" 00:24:13.896 } 00:24:13.896 ] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "scsi", 00:24:13.896 "config": null 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "scheduler", 00:24:13.896 "config": [ 00:24:13.896 { 00:24:13.896 "method": "framework_set_scheduler", 00:24:13.896 "params": { 00:24:13.896 "name": "static" 00:24:13.896 } 00:24:13.896 } 00:24:13.896 ] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "vhost_scsi", 00:24:13.896 "config": [] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "vhost_blk", 00:24:13.896 "config": [] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "ublk", 00:24:13.896 "config": [] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "nbd", 00:24:13.896 "config": [] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "nvmf", 00:24:13.896 "config": [ 00:24:13.896 { 00:24:13.896 "method": "nvmf_set_config", 00:24:13.896 "params": { 00:24:13.896 "admin_cmd_passthru": { 00:24:13.896 "identify_ctrlr": false 00:24:13.896 }, 00:24:13.896 "dhchap_dhgroups": [ 00:24:13.896 "null", 00:24:13.896 "ffdhe2048", 00:24:13.896 "ffdhe3072", 00:24:13.896 "ffdhe4096", 00:24:13.896 "ffdhe6144", 00:24:13.896 "ffdhe8192" 00:24:13.896 ], 00:24:13.896 "dhchap_digests": [ 00:24:13.896 "sha256", 00:24:13.896 "sha384", 00:24:13.896 "sha512" 00:24:13.896 ], 00:24:13.896 "discovery_filter": "match_any" 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "nvmf_set_max_subsystems", 00:24:13.896 "params": { 00:24:13.896 "max_subsystems": 1024 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "nvmf_set_crdt", 00:24:13.896 "params": { 00:24:13.896 "crdt1": 0, 00:24:13.896 "crdt2": 0, 00:24:13.896 "crdt3": 0 00:24:13.896 } 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "method": "nvmf_create_transport", 00:24:13.896 "params": { 00:24:13.896 "abort_timeout_sec": 1, 00:24:13.896 "ack_timeout": 0, 00:24:13.896 "buf_cache_size": 4294967295, 00:24:13.896 "c2h_success": true, 00:24:13.896 "data_wr_pool_size": 0, 00:24:13.896 "dif_insert_or_strip": false, 00:24:13.896 "in_capsule_data_size": 4096, 00:24:13.896 "io_unit_size": 131072, 00:24:13.896 "max_aq_depth": 128, 00:24:13.896 "max_io_qpairs_per_ctrlr": 127, 00:24:13.896 "max_io_size": 131072, 00:24:13.896 "max_queue_depth": 128, 00:24:13.896 "num_shared_buffers": 511, 00:24:13.896 "sock_priority": 0, 00:24:13.896 "trtype": "TCP", 00:24:13.896 "zcopy": false 00:24:13.896 } 00:24:13.896 } 00:24:13.896 ] 00:24:13.896 }, 00:24:13.896 { 00:24:13.896 "subsystem": "iscsi", 00:24:13.896 "config": [ 00:24:13.896 { 00:24:13.896 "method": "iscsi_set_options", 00:24:13.896 "params": { 00:24:13.896 "allow_duplicated_isid": false, 
00:24:13.896 "chap_group": 0, 00:24:13.896 "data_out_pool_size": 2048, 00:24:13.896 "default_time2retain": 20, 00:24:13.896 "default_time2wait": 2, 00:24:13.896 "disable_chap": false, 00:24:13.896 "error_recovery_level": 0, 00:24:13.896 "first_burst_length": 8192, 00:24:13.896 "immediate_data": true, 00:24:13.896 "immediate_data_pool_size": 16384, 00:24:13.896 "max_connections_per_session": 2, 00:24:13.896 "max_large_datain_per_connection": 64, 00:24:13.896 "max_queue_depth": 64, 00:24:13.896 "max_r2t_per_connection": 4, 00:24:13.896 "max_sessions": 128, 00:24:13.896 "mutual_chap": false, 00:24:13.896 "node_base": "iqn.2016-06.io.spdk", 00:24:13.896 "nop_in_interval": 30, 00:24:13.896 "nop_timeout": 60, 00:24:13.896 "pdu_pool_size": 36864, 00:24:13.896 "require_chap": false 00:24:13.896 } 00:24:13.896 } 00:24:13.896 ] 00:24:13.896 } 00:24:13.896 ] 00:24:13.896 } 00:24:13.896 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:24:13.896 05:35:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59081 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 59081 ']' 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 59081 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59081 00:24:13.897 killing process with pid 59081 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59081' 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 59081 00:24:13.897 05:35:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 59081 00:24:14.463 05:35:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59107 00:24:14.463 05:35:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:24:14.463 05:35:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:24:19.745 05:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59107 00:24:19.745 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 59107 ']' 00:24:19.745 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 59107 00:24:19.745 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:24:19.745 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:19.745 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59107 00:24:19.745 killing process with pid 59107 00:24:19.746 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:19.746 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:19.746 05:35:39 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59107' 00:24:19.746 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 59107 00:24:19.746 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 59107 00:24:20.004 05:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:24:20.004 05:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:24:20.004 00:24:20.004 real 0m6.724s 00:24:20.004 user 0m6.316s 00:24:20.004 sys 0m0.608s 00:24:20.004 ************************************ 00:24:20.004 END TEST skip_rpc_with_json 00:24:20.004 ************************************ 00:24:20.004 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:20.004 05:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:24:20.004 05:35:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:24:20.005 05:35:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:20.005 05:35:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:20.005 05:35:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.005 ************************************ 00:24:20.005 START TEST skip_rpc_with_delay 00:24:20.005 ************************************ 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:24:20.005 [2024-11-20 05:35:39.881000] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
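For reference, the failure skip_rpc_with_delay is asserting reduces to a single invocation; this is a minimal sketch using only the binary path and flags already visible in the trace above:

  # --no-rpc-server means no RPC server is started, so --wait-for-rpc has
  # nothing to wait on; spdk_app_start aborts with the *ERROR* line shown
  # above, giving the non-zero exit status the NOT wrapper expects.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc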
00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:20.005 00:24:20.005 real 0m0.098s 00:24:20.005 user 0m0.067s 00:24:20.005 sys 0m0.029s 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:20.005 ************************************ 00:24:20.005 END TEST skip_rpc_with_delay 00:24:20.005 ************************************ 00:24:20.005 05:35:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:24:20.262 05:35:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:24:20.262 05:35:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:24:20.262 05:35:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:24:20.262 05:35:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:20.262 05:35:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:20.262 05:35:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:20.262 ************************************ 00:24:20.262 START TEST exit_on_failed_rpc_init 00:24:20.262 ************************************ 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59217 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59217 00:24:20.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 59217 ']' 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:20.262 05:35:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:24:20.262 [2024-11-20 05:35:40.042163] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:24:20.262 [2024-11-20 05:35:40.042281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59217 ] 00:24:20.521 [2024-11-20 05:35:40.205398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.521 [2024-11-20 05:35:40.303668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:24:21.456 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:24:21.456 [2024-11-20 05:35:41.170868] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:21.456 [2024-11-20 05:35:41.170978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59247 ] 00:24:21.456 [2024-11-20 05:35:41.324076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.715 [2024-11-20 05:35:41.391630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.715 [2024-11-20 05:35:41.391734] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:24:21.715 [2024-11-20 05:35:41.391753] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:24:21.715 [2024-11-20 05:35:41.391765] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59217 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 59217 ']' 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 59217 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59217 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59217' 00:24:21.715 killing process with pid 59217 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 59217 00:24:21.715 05:35:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 59217 00:24:22.281 00:24:22.281 real 0m2.116s 00:24:22.281 user 0m2.311s 00:24:22.281 sys 0m0.609s 00:24:22.281 05:35:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:22.281 ************************************ 00:24:22.281 END TEST exit_on_failed_rpc_init 00:24:22.281 ************************************ 00:24:22.281 05:35:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:24:22.281 05:35:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:24:22.281 00:24:22.281 real 0m14.784s 00:24:22.281 user 0m13.751s 00:24:22.281 sys 0m1.947s 00:24:22.281 05:35:42 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:22.281 ************************************ 00:24:22.281 END TEST skip_rpc 00:24:22.281 ************************************ 00:24:22.281 05:35:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:22.281 05:35:42 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:24:22.281 05:35:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:22.281 05:35:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:22.282 05:35:42 -- common/autotest_common.sh@10 -- # set +x 00:24:22.282 
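The failure exit_on_failed_rpc_init provoked above is the usual two-targets-one-socket collision: the second spdk_tgt tried to bind the same default /var/tmp/spdk.sock. A second instance has to be pointed at its own RPC socket with -r, as a sketch (the -r flag is the same one the json_config tests below use; the second socket path here is illustrative):

  # first instance owns the default /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # a second instance must pick an unused socket via -r, or it fails as above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &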
************************************ 00:24:22.282 START TEST rpc_client 00:24:22.282 ************************************ 00:24:22.282 05:35:42 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:24:22.540 * Looking for test storage... 00:24:22.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@345 -- # : 1 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.540 05:35:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.540 --rc genhtml_branch_coverage=1 00:24:22.540 --rc genhtml_function_coverage=1 00:24:22.540 --rc genhtml_legend=1 00:24:22.540 --rc geninfo_all_blocks=1 00:24:22.540 --rc geninfo_unexecuted_blocks=1 00:24:22.540 00:24:22.540 ' 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.540 --rc genhtml_branch_coverage=1 00:24:22.540 --rc genhtml_function_coverage=1 00:24:22.540 --rc genhtml_legend=1 00:24:22.540 --rc geninfo_all_blocks=1 00:24:22.540 --rc geninfo_unexecuted_blocks=1 00:24:22.540 00:24:22.540 ' 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.540 --rc genhtml_branch_coverage=1 00:24:22.540 --rc genhtml_function_coverage=1 00:24:22.540 --rc genhtml_legend=1 00:24:22.540 --rc geninfo_all_blocks=1 00:24:22.540 --rc geninfo_unexecuted_blocks=1 00:24:22.540 00:24:22.540 ' 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.540 --rc genhtml_branch_coverage=1 00:24:22.540 --rc genhtml_function_coverage=1 00:24:22.540 --rc genhtml_legend=1 00:24:22.540 --rc geninfo_all_blocks=1 00:24:22.540 --rc geninfo_unexecuted_blocks=1 00:24:22.540 00:24:22.540 ' 00:24:22.540 05:35:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:24:22.540 OK 00:24:22.540 05:35:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:24:22.540 00:24:22.540 real 0m0.228s 00:24:22.540 user 0m0.123s 00:24:22.540 sys 0m0.120s 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:22.540 ************************************ 00:24:22.540 END TEST rpc_client 00:24:22.540 ************************************ 00:24:22.540 05:35:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:24:22.800 05:35:42 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:24:22.800 05:35:42 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:22.800 05:35:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:22.800 05:35:42 -- common/autotest_common.sh@10 -- # set +x 00:24:22.800 ************************************ 00:24:22.800 START TEST json_config 00:24:22.800 ************************************ 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.800 05:35:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.800 05:35:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.800 05:35:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.800 05:35:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.800 05:35:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.800 05:35:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.800 05:35:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.800 05:35:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:24:22.800 05:35:42 json_config -- scripts/common.sh@345 -- # : 1 00:24:22.800 05:35:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.800 05:35:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.800 05:35:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:24:22.800 05:35:42 json_config -- scripts/common.sh@353 -- # local d=1 00:24:22.800 05:35:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.800 05:35:42 json_config -- scripts/common.sh@355 -- # echo 1 00:24:22.800 05:35:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.800 05:35:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@353 -- # local d=2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.800 05:35:42 json_config -- scripts/common.sh@355 -- # echo 2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.800 05:35:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.800 05:35:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.800 05:35:42 json_config -- scripts/common.sh@368 -- # return 0 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:22.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.800 --rc genhtml_branch_coverage=1 00:24:22.800 --rc genhtml_function_coverage=1 00:24:22.800 --rc genhtml_legend=1 00:24:22.800 --rc geninfo_all_blocks=1 00:24:22.800 --rc geninfo_unexecuted_blocks=1 00:24:22.800 00:24:22.800 ' 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:22.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.800 --rc genhtml_branch_coverage=1 00:24:22.800 --rc genhtml_function_coverage=1 00:24:22.800 --rc genhtml_legend=1 00:24:22.800 --rc geninfo_all_blocks=1 00:24:22.800 --rc geninfo_unexecuted_blocks=1 00:24:22.800 00:24:22.800 ' 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:22.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.800 --rc genhtml_branch_coverage=1 00:24:22.800 --rc genhtml_function_coverage=1 00:24:22.800 --rc genhtml_legend=1 00:24:22.800 --rc geninfo_all_blocks=1 00:24:22.800 --rc geninfo_unexecuted_blocks=1 00:24:22.800 00:24:22.800 ' 00:24:22.800 05:35:42 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:22.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.800 --rc genhtml_branch_coverage=1 00:24:22.800 --rc genhtml_function_coverage=1 00:24:22.800 --rc genhtml_legend=1 00:24:22.800 --rc geninfo_all_blocks=1 00:24:22.800 --rc geninfo_unexecuted_blocks=1 00:24:22.800 00:24:22.800 ' 00:24:22.800 05:35:42 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.800 05:35:42 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:22.800 05:35:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.800 05:35:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.800 05:35:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.800 05:35:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.800 05:35:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.800 05:35:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.800 05:35:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.800 05:35:42 json_config -- paths/export.sh@5 -- # export PATH 00:24:22.800 05:35:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@51 -- # : 0 00:24:22.800 05:35:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.801 05:35:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.801 05:35:42 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.801 05:35:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.801 05:35:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.801 05:35:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.801 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.801 05:35:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.801 05:35:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.801 05:35:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:24:22.801 INFO: JSON configuration test init 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:22.801 05:35:42 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:24:22.801 05:35:42 json_config -- json_config/common.sh@9 -- # local app=target 00:24:22.801 05:35:42 json_config -- json_config/common.sh@10 -- # shift 
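With app_socket and configs_path declared as above, everything the json_config test drives from here is a save/load round trip over the target's dedicated socket. Condensed to the two underlying rpc.py calls (both commands appear verbatim later in this log; the stdin/stdout redirection is an assumption about how the helpers wire them together):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json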
00:24:22.801 05:35:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:24:22.801 05:35:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:24:22.801 05:35:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:24:22.801 05:35:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:24:22.801 05:35:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:24:22.801 05:35:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59392 00:24:22.801 Waiting for target to run... 00:24:22.801 05:35:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:24:22.801 05:35:42 json_config -- json_config/common.sh@25 -- # waitforlisten 59392 /var/tmp/spdk_tgt.sock 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@833 -- # '[' -z 59392 ']' 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:24:22.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:24:22.801 05:35:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:22.801 05:35:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:23.059 [2024-11-20 05:35:42.780581] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:23.060 [2024-11-20 05:35:42.780692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:24:23.318 [2024-11-20 05:35:43.175744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.318 [2024-11-20 05:35:43.230174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.253 05:35:43 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:24.253 05:35:43 json_config -- common/autotest_common.sh@866 -- # return 0 00:24:24.253 00:24:24.253 05:35:43 json_config -- json_config/common.sh@26 -- # echo '' 00:24:24.253 05:35:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:24:24.253 05:35:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:24:24.253 05:35:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.253 05:35:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:24.253 05:35:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:24:24.253 05:35:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:24:24.253 05:35:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.253 05:35:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:24.253 05:35:43 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:24:24.253 05:35:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:24:24.253 05:35:43 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:24:24.819 05:35:44 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:24:24.819 05:35:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:24:24.819 05:35:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:24.819 05:35:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:24.819 05:35:44 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:24:24.819 05:35:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:24:24.820 05:35:44 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:24:24.820 05:35:44 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:24:24.820 05:35:44 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:24:24.820 05:35:44 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:24:24.820 05:35:44 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:24:24.820 05:35:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@51 -- # local get_types 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@54 -- # sort 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:24:25.078 05:35:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.078 05:35:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@62 -- # return 0 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:24:25.078 05:35:44 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:24:25.079 05:35:44 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:24:25.079 05:35:44 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:24:25.079 05:35:44 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:24:25.079 05:35:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.079 05:35:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:25.079 05:35:44 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:24:25.079 05:35:44 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:24:25.079 05:35:44 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:24:25.079 05:35:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:24:25.079 05:35:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:24:25.338 MallocForNvmf0 00:24:25.338 05:35:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:24:25.338 05:35:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:24:25.903 MallocForNvmf1 00:24:25.904 05:35:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:24:25.904 05:35:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:24:25.904 [2024-11-20 05:35:45.805189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.160 05:35:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.160 05:35:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.418 05:35:46 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:24:26.418 05:35:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:24:26.676 05:35:46 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:24:26.676 05:35:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:24:26.987 05:35:46 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:24:26.987 05:35:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:24:27.269 [2024-11-20 05:35:47.149876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:27.269 05:35:47 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:24:27.269 05:35:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.269 05:35:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:27.528 05:35:47 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:24:27.528 05:35:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.528 05:35:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:27.528 05:35:47 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:24:27.528 05:35:47 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:24:27.528 05:35:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:24:27.786 MallocBdevForConfigChangeCheck 00:24:27.786 05:35:47 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:24:27.786 05:35:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.786 05:35:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:27.786 05:35:47 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:24:27.786 05:35:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:24:28.352 INFO: shutting down applications... 00:24:28.352 05:35:48 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:24:28.352 05:35:48 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:24:28.352 05:35:48 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:24:28.352 05:35:48 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:24:28.352 05:35:48 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:24:28.611 Calling clear_iscsi_subsystem 00:24:28.611 Calling clear_nvmf_subsystem 00:24:28.611 Calling clear_nbd_subsystem 00:24:28.611 Calling clear_ublk_subsystem 00:24:28.611 Calling clear_vhost_blk_subsystem 00:24:28.611 Calling clear_vhost_scsi_subsystem 00:24:28.611 Calling clear_bdev_subsystem 00:24:28.611 05:35:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:24:28.611 05:35:48 json_config -- json_config/json_config.sh@350 -- # count=100 00:24:28.611 05:35:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:24:28.611 05:35:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:24:28.611 05:35:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:24:28.611 05:35:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:24:29.176 05:35:48 json_config -- json_config/json_config.sh@352 -- # break 00:24:29.176 05:35:48 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:24:29.176 05:35:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:24:29.176 05:35:48 json_config -- json_config/common.sh@31 -- # local app=target 00:24:29.176 05:35:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:24:29.176 05:35:48 json_config -- json_config/common.sh@35 -- # [[ -n 59392 ]] 00:24:29.176 05:35:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59392 00:24:29.176 05:35:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:24:29.176 05:35:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:24:29.176 05:35:48 json_config -- json_config/common.sh@41 -- # kill -0 59392 00:24:29.176 05:35:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:24:29.753 05:35:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:24:29.753 05:35:49 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:24:29.753 05:35:49 json_config -- json_config/common.sh@41 -- # kill -0 59392 00:24:29.753 05:35:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:24:29.753 05:35:49 json_config -- json_config/common.sh@43 -- # break 00:24:29.753 05:35:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:24:29.753 SPDK target shutdown done 00:24:29.753 05:35:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:24:29.753 INFO: relaunching applications... 00:24:29.753 05:35:49 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:24:29.753 05:35:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:24:29.753 05:35:49 json_config -- json_config/common.sh@9 -- # local app=target 00:24:29.753 05:35:49 json_config -- json_config/common.sh@10 -- # shift 00:24:29.753 05:35:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:24:29.753 05:35:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:24:29.753 05:35:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:24:29.753 05:35:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:24:29.753 05:35:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:24:29.753 05:35:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59677 00:24:29.753 05:35:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:24:29.753 Waiting for target to run... 00:24:29.753 05:35:49 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:24:29.753 05:35:49 json_config -- json_config/common.sh@25 -- # waitforlisten 59677 /var/tmp/spdk_tgt.sock 00:24:29.753 05:35:49 json_config -- common/autotest_common.sh@833 -- # '[' -z 59677 ']' 00:24:29.753 05:35:49 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:24:29.753 05:35:49 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:29.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:24:29.753 05:35:49 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:24:29.753 05:35:49 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:29.753 05:35:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:29.753 [2024-11-20 05:35:49.499142] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
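The relaunch above is the round trip the test exists for: instead of being configured call-by-call over RPC, the target now boots directly from the JSON that save_config wrote. The invocation, repeated from the trace for clarity (same socket, core mask, and memory size as the first launch):

  # --json loads the saved configuration at startup; no RPC calls are needed
  # to recreate the transport, subsystems, and namespaces captured in the file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json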
00:24:29.753 [2024-11-20 05:35:49.499260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59677 ] 00:24:30.321 [2024-11-20 05:35:50.071580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.321 [2024-11-20 05:35:50.124219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.579 [2024-11-20 05:35:50.477958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.837 [2024-11-20 05:35:50.510069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:30.837 05:35:50 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:30.837 05:35:50 json_config -- common/autotest_common.sh@866 -- # return 0 00:24:30.837 00:24:30.837 05:35:50 json_config -- json_config/common.sh@26 -- # echo '' 00:24:30.837 05:35:50 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:24:30.837 INFO: Checking if target configuration is the same... 00:24:30.837 05:35:50 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:24:30.837 05:35:50 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:24:30.837 05:35:50 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:24:30.837 05:35:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:24:30.837 + '[' 2 -ne 2 ']' 00:24:30.837 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:24:30.837 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:24:30.837 + rootdir=/home/vagrant/spdk_repo/spdk 00:24:30.837 +++ basename /dev/fd/62 00:24:30.837 ++ mktemp /tmp/62.XXX 00:24:30.837 + tmp_file_1=/tmp/62.GaJ 00:24:30.837 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:24:30.837 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:24:30.837 + tmp_file_2=/tmp/spdk_tgt_config.json.SG0 00:24:30.837 + ret=0 00:24:30.837 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:24:31.405 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:24:31.405 + diff -u /tmp/62.GaJ /tmp/spdk_tgt_config.json.SG0 00:24:31.405 + echo 'INFO: JSON config files are the same' 00:24:31.405 INFO: JSON config files are the same 00:24:31.405 + rm /tmp/62.GaJ /tmp/spdk_tgt_config.json.SG0 00:24:31.405 + exit 0 00:24:31.405 INFO: changing configuration and checking if this can be detected... 00:24:31.405 05:35:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:24:31.405 05:35:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
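The detection pass that follows is the same diff dance with one twist: the configuration is mutated first, so the sorted dumps must no longer match. Condensed sketch (temp file names are illustrative; the RPC, filter, and diff commands are the ones traced below):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > after.json
  # -method sort normalizes key order so diff only reports real config changes
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < after.json > after.sorted
  diff -u before.sorted after.sorted   # non-empty diff (ret=1) => change detected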
00:24:31.405 05:35:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:24:31.405 05:35:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:24:31.664 05:35:51 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:24:31.664 05:35:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:24:31.664 05:35:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:24:31.664 + '[' 2 -ne 2 ']' 00:24:31.664 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:24:31.664 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:24:31.664 + rootdir=/home/vagrant/spdk_repo/spdk 00:24:31.664 +++ basename /dev/fd/62 00:24:31.664 ++ mktemp /tmp/62.XXX 00:24:31.664 + tmp_file_1=/tmp/62.OI4 00:24:31.664 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:24:31.664 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:24:31.664 + tmp_file_2=/tmp/spdk_tgt_config.json.rNW 00:24:31.664 + ret=0 00:24:31.664 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:24:32.230 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:24:32.230 + diff -u /tmp/62.OI4 /tmp/spdk_tgt_config.json.rNW 00:24:32.230 + ret=1 00:24:32.230 + echo '=== Start of file: /tmp/62.OI4 ===' 00:24:32.230 + cat /tmp/62.OI4 00:24:32.230 + echo '=== End of file: /tmp/62.OI4 ===' 00:24:32.230 + echo '' 00:24:32.230 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rNW ===' 00:24:32.230 + cat /tmp/spdk_tgt_config.json.rNW 00:24:32.230 + echo '=== End of file: /tmp/spdk_tgt_config.json.rNW ===' 00:24:32.230 + echo '' 00:24:32.230 + rm /tmp/62.OI4 /tmp/spdk_tgt_config.json.rNW 00:24:32.230 + exit 1 00:24:32.230 INFO: configuration change detected. 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
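json_diff.sh, invoked twice in this test, cannot compare the live config and the on-disk file byte for byte because save_config output ordering is not guaranteed, so both sides are first normalized with config_filter.py -method sort. A sketch of that comparison, assuming the filter reads stdin and writes stdout (the /dev/fd/62 argument above is the process substitution carrying the live save_config output):

    rootdir=/home/vagrant/spdk_repo/spdk
    tmp_file_1=$(mktemp /tmp/62.XXX)
    tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    # Normalize key order on both sides before diffing.
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > "$tmp_file_1"
    "$rootdir/test/json_config/config_filter.py" -method sort \
        < "$rootdir/spdk_tgt_config.json" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo 'INFO: JSON config files are the same'    # first pass above: exit 0
    else
        echo 'INFO: configuration change detected.'    # second pass above: ret=1
    fi
    rm -f "$tmp_file_1" "$tmp_file_2"

Deleting MallocBdevForConfigChangeCheck between the two passes is what flips the result from exit 0 to ret=1.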
00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:24:32.230 05:35:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.230 05:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@324 -- # [[ -n 59677 ]] 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:24:32.230 05:35:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:32.230 05:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@200 -- # uname -s 00:24:32.230 05:35:51 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:24:32.231 05:35:51 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:24:32.231 05:35:51 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:24:32.231 05:35:51 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:24:32.231 05:35:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.231 05:35:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:32.231 05:35:52 json_config -- json_config/json_config.sh@330 -- # killprocess 59677 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@952 -- # '[' -z 59677 ']' 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@956 -- # kill -0 59677 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@957 -- # uname 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59677 00:24:32.231 killing process with pid 59677 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59677' 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@971 -- # kill 59677 00:24:32.231 05:35:52 json_config -- common/autotest_common.sh@976 -- # wait 59677 00:24:32.489 05:35:52 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:24:32.489 05:35:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:24:32.489 05:35:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.489 05:35:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:32.489 INFO: Success 00:24:32.489 05:35:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:24:32.489 05:35:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:24:32.489 ************************************ 00:24:32.489 END TEST json_config 00:24:32.489 
************************************ 00:24:32.489 00:24:32.489 real 0m9.848s 00:24:32.489 user 0m14.269s 00:24:32.489 sys 0m2.359s 00:24:32.489 05:35:52 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:32.489 05:35:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:24:32.489 05:35:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:24:32.489 05:35:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:32.489 05:35:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:32.489 05:35:52 -- common/autotest_common.sh@10 -- # set +x 00:24:32.489 ************************************ 00:24:32.489 START TEST json_config_extra_key 00:24:32.489 ************************************ 00:24:32.489 05:35:52 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.749 05:35:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.749 --rc genhtml_branch_coverage=1 00:24:32.749 --rc genhtml_function_coverage=1 00:24:32.749 --rc genhtml_legend=1 00:24:32.749 --rc geninfo_all_blocks=1 00:24:32.749 --rc geninfo_unexecuted_blocks=1 00:24:32.749 00:24:32.749 ' 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.749 --rc genhtml_branch_coverage=1 00:24:32.749 --rc genhtml_function_coverage=1 00:24:32.749 --rc genhtml_legend=1 00:24:32.749 --rc geninfo_all_blocks=1 00:24:32.749 --rc geninfo_unexecuted_blocks=1 00:24:32.749 00:24:32.749 ' 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.749 --rc genhtml_branch_coverage=1 00:24:32.749 --rc genhtml_function_coverage=1 00:24:32.749 --rc genhtml_legend=1 00:24:32.749 --rc geninfo_all_blocks=1 00:24:32.749 --rc geninfo_unexecuted_blocks=1 00:24:32.749 00:24:32.749 ' 00:24:32.749 05:35:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:32.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.749 --rc genhtml_branch_coverage=1 00:24:32.749 --rc genhtml_function_coverage=1 00:24:32.749 --rc genhtml_legend=1 00:24:32.749 --rc geninfo_all_blocks=1 00:24:32.749 --rc geninfo_unexecuted_blocks=1 00:24:32.749 00:24:32.749 ' 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.750 05:35:52 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.750 05:35:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.750 05:35:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.750 05:35:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.750 05:35:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.750 05:35:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.750 05:35:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.750 05:35:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.750 05:35:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:24:32.750 05:35:52 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.750 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.750 05:35:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:24:32.750 INFO: launching applications... 
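The four declare -A lines above are the whole state model of json_config/common.sh: each app is an entry keyed by name ('target' here) in parallel associative arrays for pid, RPC socket, spdk_tgt parameters, and config path. (The '[: : integer expression expected' complaint from nvmf/common.sh line 33 is the script testing an unset variable with -eq; the run continues past it.) Condensed, the bookkeeping amounts to:

    rootdir=/home/vagrant/spdk_repo/spdk
    declare -A app_pid=([target]='')                   # filled in at launch (59861 in this run)
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    # Every later helper indexes these by app name:
    app=target
    echo "pid=${app_pid[$app]} socket=${app_socket[$app]} config=${configs_path[$app]}"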
00:24:32.750 05:35:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59861 00:24:32.750 Waiting for target to run... 00:24:32.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59861 /var/tmp/spdk_tgt.sock 00:24:32.750 05:35:52 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 59861 ']' 00:24:32.750 05:35:52 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:24:32.750 05:35:52 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:32.750 05:35:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:24:32.750 05:35:52 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:24:32.750 05:35:52 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:32.750 05:35:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:24:32.750 [2024-11-20 05:35:52.656115] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:32.750 [2024-11-20 05:35:52.656219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59861 ] 00:24:33.317 [2024-11-20 05:35:53.048925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.317 [2024-11-20 05:35:53.103018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.885 00:24:33.885 INFO: shutting down applications... 00:24:33.885 05:35:53 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:33.885 05:35:53 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:24:33.885 05:35:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
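waitforlisten (common/autotest_common.sh in the trace above) is the gate between launching spdk_tgt and sending it RPCs: it polls until the new process is both alive and answering on its UNIX socket. A rough equivalent, with max_retries=100 taken from the trace; probing readiness with an rpc_get_methods call is an assumption here, not the literal autotest_common.sh body:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # Assumed probe: any cheap RPC succeeds once the socket is live.
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }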
00:24:33.885 05:35:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59861 ]] 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59861 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59861 00:24:33.885 05:35:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:24:34.449 05:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:24:34.449 05:35:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:24:34.449 05:35:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59861 00:24:34.449 05:35:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:24:34.449 05:35:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:24:34.449 05:35:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:24:34.449 05:35:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:24:34.449 SPDK target shutdown done 00:24:34.449 05:35:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:24:34.449 Success 00:24:34.449 00:24:34.449 real 0m1.863s 00:24:34.449 user 0m1.784s 00:24:34.449 sys 0m0.483s 00:24:34.449 05:35:54 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:34.449 ************************************ 00:24:34.449 END TEST json_config_extra_key 00:24:34.449 ************************************ 00:24:34.449 05:35:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:24:34.449 05:35:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:24:34.449 05:35:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:34.449 05:35:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:34.449 05:35:54 -- common/autotest_common.sh@10 -- # set +x 00:24:34.449 ************************************ 00:24:34.449 START TEST alias_rpc 00:24:34.449 ************************************ 00:24:34.449 05:35:54 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:24:34.708 * Looking for test storage... 
00:24:34.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.708 05:35:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:34.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.708 --rc genhtml_branch_coverage=1 00:24:34.708 --rc genhtml_function_coverage=1 00:24:34.708 --rc genhtml_legend=1 00:24:34.708 --rc geninfo_all_blocks=1 00:24:34.708 --rc geninfo_unexecuted_blocks=1 00:24:34.708 00:24:34.708 ' 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:34.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.708 --rc genhtml_branch_coverage=1 00:24:34.708 --rc genhtml_function_coverage=1 00:24:34.708 --rc genhtml_legend=1 00:24:34.708 --rc geninfo_all_blocks=1 00:24:34.708 --rc geninfo_unexecuted_blocks=1 00:24:34.708 00:24:34.708 ' 00:24:34.708 05:35:54 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:34.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.708 --rc genhtml_branch_coverage=1 00:24:34.708 --rc genhtml_function_coverage=1 00:24:34.708 --rc genhtml_legend=1 00:24:34.708 --rc geninfo_all_blocks=1 00:24:34.708 --rc geninfo_unexecuted_blocks=1 00:24:34.708 00:24:34.708 ' 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:34.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.708 --rc genhtml_branch_coverage=1 00:24:34.708 --rc genhtml_function_coverage=1 00:24:34.708 --rc genhtml_legend=1 00:24:34.708 --rc geninfo_all_blocks=1 00:24:34.708 --rc geninfo_unexecuted_blocks=1 00:24:34.708 00:24:34.708 ' 00:24:34.708 05:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:24:34.708 05:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59951 00:24:34.708 05:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:34.708 05:35:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59951 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 59951 ']' 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:34.708 05:35:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:34.708 [2024-11-20 05:35:54.593471] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
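The lt 1.15 2 sequence traced above (and repeated at the start of each suite in this log) is scripts/common.sh deciding whether the installed lcov is new enough for the branch-coverage flags: versions are split on '.', '-', and ':' via IFS and compared field by field. A compact sketch of that comparator (numeric fields assumed; the real helper validates each field with its decimal function first):

    cmp_versions() {
        local IFS='.-:' v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # Unset array slots evaluate to 0 in bash arithmetic, which pads
        # the shorter version; compare left to right until fields differ.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $2 == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' || $2 == '<=' || $2 == '>=' ]]   # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2   # true here (1.15 < 2), so the --rc branch/function coverage flags are exported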
00:24:34.708 [2024-11-20 05:35:54.593807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:24:34.967 [2024-11-20 05:35:54.800639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.967 [2024-11-20 05:35:54.859994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.901 05:35:55 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:35.901 05:35:55 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:24:35.901 05:35:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:24:36.158 05:35:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59951 00:24:36.158 05:35:55 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 59951 ']' 00:24:36.158 05:35:55 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 59951 00:24:36.158 05:35:55 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:24:36.158 05:35:55 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:36.158 05:35:55 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59951 00:24:36.158 killing process with pid 59951 00:24:36.158 05:35:56 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:36.158 05:35:56 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:36.158 05:35:56 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59951' 00:24:36.158 05:35:56 alias_rpc -- common/autotest_common.sh@971 -- # kill 59951 00:24:36.158 05:35:56 alias_rpc -- common/autotest_common.sh@976 -- # wait 59951 00:24:36.416 ************************************ 00:24:36.416 END TEST alias_rpc 00:24:36.416 ************************************ 00:24:36.416 00:24:36.416 real 0m2.020s 00:24:36.416 user 0m2.375s 00:24:36.416 sys 0m0.486s 00:24:36.416 05:35:56 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:36.416 05:35:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:36.674 05:35:56 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:24:36.674 05:35:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:24:36.674 05:35:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:36.674 05:35:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:36.674 05:35:56 -- common/autotest_common.sh@10 -- # set +x 00:24:36.674 ************************************ 00:24:36.674 START TEST dpdk_mem_utility 00:24:36.674 ************************************ 00:24:36.674 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:24:36.674 * Looking for test storage... 
00:24:36.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:24:36.674 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:36.674 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:24:36.674 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:36.674 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:24:36.674 05:35:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.675 05:35:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:24:36.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:36.933 05:35:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.933 05:35:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.933 05:35:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.933 05:35:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:36.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.933 --rc genhtml_branch_coverage=1 00:24:36.933 --rc genhtml_function_coverage=1 00:24:36.933 --rc genhtml_legend=1 00:24:36.933 --rc geninfo_all_blocks=1 00:24:36.933 --rc geninfo_unexecuted_blocks=1 00:24:36.933 00:24:36.933 ' 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:36.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.933 --rc genhtml_branch_coverage=1 00:24:36.933 --rc genhtml_function_coverage=1 00:24:36.933 --rc genhtml_legend=1 00:24:36.933 --rc geninfo_all_blocks=1 00:24:36.933 --rc geninfo_unexecuted_blocks=1 00:24:36.933 00:24:36.933 ' 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:36.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.933 --rc genhtml_branch_coverage=1 00:24:36.933 --rc genhtml_function_coverage=1 00:24:36.933 --rc genhtml_legend=1 00:24:36.933 --rc geninfo_all_blocks=1 00:24:36.933 --rc geninfo_unexecuted_blocks=1 00:24:36.933 00:24:36.933 ' 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:36.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.933 --rc genhtml_branch_coverage=1 00:24:36.933 --rc genhtml_function_coverage=1 00:24:36.933 --rc genhtml_legend=1 00:24:36.933 --rc geninfo_all_blocks=1 00:24:36.933 --rc geninfo_unexecuted_blocks=1 00:24:36.933 00:24:36.933 ' 00:24:36.933 05:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:24:36.933 05:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60051 00:24:36.933 05:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60051 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 60051 ']' 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:36.933 05:35:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:36.933 05:35:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:24:36.933 [2024-11-20 05:35:56.659076] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:24:36.933 [2024-11-20 05:35:56.659179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60051 ] 00:24:36.933 [2024-11-20 05:35:56.817168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.192 [2024-11-20 05:35:56.885854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.129 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.129 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:24:38.129 05:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:24:38.129 05:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:24:38.129 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.129 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:24:38.129 { 00:24:38.129 "filename": "/tmp/spdk_mem_dump.txt" 00:24:38.129 } 00:24:38.129 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.129 05:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:24:38.129 DPDK memory size 818.000000 MiB in 1 heap(s) 00:24:38.129 1 heaps totaling size 818.000000 MiB 00:24:38.129 size: 818.000000 MiB heap id: 0 00:24:38.129 end heaps---------- 00:24:38.129 9 mempools totaling size 603.782043 MiB 00:24:38.129 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:24:38.129 size: 158.602051 MiB name: PDU_data_out_Pool 00:24:38.129 size: 100.555481 MiB name: bdev_io_60051 00:24:38.129 size: 50.003479 MiB name: msgpool_60051 00:24:38.129 size: 36.509338 MiB name: fsdev_io_60051 00:24:38.129 size: 21.763794 MiB name: PDU_Pool 00:24:38.129 size: 19.513306 MiB name: SCSI_TASK_Pool 00:24:38.129 size: 4.133484 MiB name: evtpool_60051 00:24:38.129 size: 0.026123 MiB name: Session_Pool 00:24:38.129 end mempools------- 00:24:38.129 6 memzones totaling size 4.142822 MiB 00:24:38.129 size: 1.000366 MiB name: RG_ring_0_60051 00:24:38.129 size: 1.000366 MiB name: RG_ring_1_60051 00:24:38.129 size: 1.000366 MiB name: RG_ring_4_60051 00:24:38.129 size: 1.000366 MiB name: RG_ring_5_60051 00:24:38.129 size: 0.125366 MiB name: RG_ring_2_60051 00:24:38.129 size: 0.015991 MiB name: RG_ring_3_60051 00:24:38.129 end memzones------- 00:24:38.129 05:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:24:38.129 heap id: 0 total size: 818.000000 MiB number of busy elements: 237 number of free elements: 15 00:24:38.129 list of free elements. 
size: 10.817139 MiB 00:24:38.129 element at address: 0x200019200000 with size: 0.999878 MiB 00:24:38.129 element at address: 0x200019400000 with size: 0.999878 MiB 00:24:38.129 element at address: 0x200000400000 with size: 0.996338 MiB 00:24:38.129 element at address: 0x200032000000 with size: 0.994446 MiB 00:24:38.129 element at address: 0x200006400000 with size: 0.959839 MiB 00:24:38.129 element at address: 0x200012c00000 with size: 0.944275 MiB 00:24:38.129 element at address: 0x200019600000 with size: 0.936584 MiB 00:24:38.129 element at address: 0x200000200000 with size: 0.717346 MiB 00:24:38.129 element at address: 0x20001ae00000 with size: 0.570984 MiB 00:24:38.129 element at address: 0x200000c00000 with size: 0.490662 MiB 00:24:38.129 element at address: 0x20000a600000 with size: 0.489441 MiB 00:24:38.129 element at address: 0x200019800000 with size: 0.485657 MiB 00:24:38.129 element at address: 0x200003e00000 with size: 0.481018 MiB 00:24:38.129 element at address: 0x200028200000 with size: 0.397400 MiB 00:24:38.129 element at address: 0x200000800000 with size: 0.353394 MiB 00:24:38.129 list of standard malloc elements. size: 199.253967 MiB 00:24:38.129 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:24:38.129 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:24:38.129 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:24:38.129 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:24:38.129 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:24:38.129 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:24:38.129 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:24:38.129 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:24:38.129 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:24:38.129 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000085a780 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000085a980 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:24:38.129 element at address: 0x20000087f080 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f140 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f200 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f380 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f440 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f500 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000087f680 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000cff000 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200003efb980 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:24:38.129 element at 
address: 0x20000a67d4c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:24:38.129 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:24:38.129 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93d00 
with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:24:38.130 element at address: 0x200028265bc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x200028265c80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826c880 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d080 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d140 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d200 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d380 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d440 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d500 with size: 0.000183 MiB 
00:24:38.130 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d680 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d740 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d800 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826d980 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826da40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826db00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826de00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826df80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e040 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e100 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e280 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e340 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e400 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e580 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e640 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e700 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e880 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826e940 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f000 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f180 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f240 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f300 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f480 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f540 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f600 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f780 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f840 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f900 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:24:38.130 element at 
address: 0x20002826fa80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:24:38.130 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:24:38.130 list of memzone associated elements. size: 607.928894 MiB 00:24:38.130 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:24:38.130 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:24:38.130 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:24:38.130 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:24:38.130 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:24:38.130 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_60051_0 00:24:38.130 element at address: 0x200000dff380 with size: 48.003052 MiB 00:24:38.130 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60051_0 00:24:38.130 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:24:38.130 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60051_0 00:24:38.130 element at address: 0x2000199be940 with size: 20.255554 MiB 00:24:38.130 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:24:38.130 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:24:38.130 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:24:38.130 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:24:38.130 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60051_0 00:24:38.130 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:24:38.130 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60051 00:24:38.130 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:24:38.130 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60051 00:24:38.130 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:24:38.130 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:24:38.130 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:24:38.130 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:24:38.130 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:24:38.130 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:24:38.130 element at address: 0x200003efba40 with size: 1.008118 MiB 00:24:38.130 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:24:38.130 element at address: 0x200000cff180 with size: 1.000488 MiB 00:24:38.130 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60051 00:24:38.130 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:24:38.130 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60051 00:24:38.130 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:24:38.130 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60051 00:24:38.130 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:24:38.130 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60051 00:24:38.130 element at address: 0x20000087f740 with size: 0.500488 MiB 00:24:38.130 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60051 00:24:38.130 element at address: 0x200000c7ee00 with 
size: 0.500488 MiB 00:24:38.130 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60051 00:24:38.130 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:24:38.130 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:24:38.130 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:24:38.130 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:24:38.130 element at address: 0x20001987c540 with size: 0.250488 MiB 00:24:38.130 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:24:38.130 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:24:38.130 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60051 00:24:38.130 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:24:38.130 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60051 00:24:38.130 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:24:38.130 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:24:38.130 element at address: 0x200028265d40 with size: 0.023743 MiB 00:24:38.130 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:24:38.130 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:24:38.130 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60051 00:24:38.130 element at address: 0x20002826be80 with size: 0.002441 MiB 00:24:38.130 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:24:38.130 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:24:38.130 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60051 00:24:38.130 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:24:38.130 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60051 00:24:38.130 element at address: 0x20000085a840 with size: 0.000305 MiB 00:24:38.130 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60051 00:24:38.130 element at address: 0x20002826c940 with size: 0.000305 MiB 00:24:38.130 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:24:38.130 05:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:24:38.130 05:35:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60051 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 60051 ']' 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 60051 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60051 00:24:38.130 killing process with pid 60051 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60051' 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 60051 00:24:38.130 05:35:57 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 60051 00:24:38.388 00:24:38.388 real 0m1.917s 00:24:38.388 user 0m2.175s 00:24:38.388 sys 0m0.483s 00:24:38.388 05:35:58 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:38.388 
************************************ 00:24:38.388 END TEST dpdk_mem_utility 00:24:38.388 ************************************ 00:24:38.388 05:35:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:24:38.646 05:35:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:24:38.646 05:35:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:38.646 05:35:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:38.646 05:35:58 -- common/autotest_common.sh@10 -- # set +x 00:24:38.646 ************************************ 00:24:38.646 START TEST event 00:24:38.646 ************************************ 00:24:38.646 05:35:58 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:24:38.646 * Looking for test storage... 00:24:38.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:24:38.646 05:35:58 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:38.646 05:35:58 event -- common/autotest_common.sh@1691 -- # lcov --version 00:24:38.646 05:35:58 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:38.905 05:35:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.905 05:35:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.905 05:35:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.905 05:35:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.905 05:35:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.905 05:35:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.905 05:35:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.905 05:35:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.905 05:35:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.905 05:35:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.905 05:35:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.905 05:35:58 event -- scripts/common.sh@344 -- # case "$op" in 00:24:38.905 05:35:58 event -- scripts/common.sh@345 -- # : 1 00:24:38.905 05:35:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.905 05:35:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.905 05:35:58 event -- scripts/common.sh@365 -- # decimal 1 00:24:38.905 05:35:58 event -- scripts/common.sh@353 -- # local d=1 00:24:38.905 05:35:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.905 05:35:58 event -- scripts/common.sh@355 -- # echo 1 00:24:38.905 05:35:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.905 05:35:58 event -- scripts/common.sh@366 -- # decimal 2 00:24:38.905 05:35:58 event -- scripts/common.sh@353 -- # local d=2 00:24:38.905 05:35:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.905 05:35:58 event -- scripts/common.sh@355 -- # echo 2 00:24:38.905 05:35:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.905 05:35:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.905 05:35:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.905 05:35:58 event -- scripts/common.sh@368 -- # return 0 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:38.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.905 --rc genhtml_branch_coverage=1 00:24:38.905 --rc genhtml_function_coverage=1 00:24:38.905 --rc genhtml_legend=1 00:24:38.905 --rc geninfo_all_blocks=1 00:24:38.905 --rc geninfo_unexecuted_blocks=1 00:24:38.905 00:24:38.905 ' 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:38.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.905 --rc genhtml_branch_coverage=1 00:24:38.905 --rc genhtml_function_coverage=1 00:24:38.905 --rc genhtml_legend=1 00:24:38.905 --rc geninfo_all_blocks=1 00:24:38.905 --rc geninfo_unexecuted_blocks=1 00:24:38.905 00:24:38.905 ' 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:38.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.905 --rc genhtml_branch_coverage=1 00:24:38.905 --rc genhtml_function_coverage=1 00:24:38.905 --rc genhtml_legend=1 00:24:38.905 --rc geninfo_all_blocks=1 00:24:38.905 --rc geninfo_unexecuted_blocks=1 00:24:38.905 00:24:38.905 ' 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:38.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.905 --rc genhtml_branch_coverage=1 00:24:38.905 --rc genhtml_function_coverage=1 00:24:38.905 --rc genhtml_legend=1 00:24:38.905 --rc geninfo_all_blocks=1 00:24:38.905 --rc geninfo_unexecuted_blocks=1 00:24:38.905 00:24:38.905 ' 00:24:38.905 05:35:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:38.905 05:35:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:24:38.905 05:35:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:24:38.905 05:35:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:38.905 05:35:58 event -- common/autotest_common.sh@10 -- # set +x 00:24:38.905 ************************************ 00:24:38.905 START TEST event_perf 00:24:38.905 ************************************ 00:24:38.905 05:35:58 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:24:38.905 Running I/O for 1 seconds...[2024-11-20 
05:35:58.616423] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:24:38.905 [2024-11-20 05:35:58.616632] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60154 ] 00:24:38.905 [2024-11-20 05:35:58.760284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.905 [2024-11-20 05:35:58.819029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.905 [2024-11-20 05:35:58.819086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.905 [2024-11-20 05:35:58.819188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.905 [2024-11-20 05:35:58.819191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.279 Running I/O for 1 seconds... 00:24:40.279 lcore 0: 182307 00:24:40.279 lcore 1: 182305 00:24:40.279 lcore 2: 182307 00:24:40.279 lcore 3: 182306 00:24:40.279 done. 00:24:40.279 00:24:40.279 real 0m1.276s 00:24:40.279 ************************************ 00:24:40.279 END TEST event_perf 00:24:40.279 ************************************ 00:24:40.279 user 0m4.104s 00:24:40.279 sys 0m0.045s 00:24:40.279 05:35:59 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:40.279 05:35:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:24:40.279 05:35:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:24:40.279 05:35:59 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:40.279 05:35:59 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:40.279 05:35:59 event -- common/autotest_common.sh@10 -- # set +x 00:24:40.279 ************************************ 00:24:40.279 START TEST event_reactor 00:24:40.279 ************************************ 00:24:40.279 05:35:59 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:24:40.279 [2024-11-20 05:35:59.950497] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
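
The per-lcore counters are the whole payload of event_perf: run with -m 0xF -t 1, four reactors each pumped events for one second, and the four counters above sum to 729,225, roughly 730k events per second for the box. Assuming the tool's stdout were saved to a file without the harness timestamps (the file name below is illustrative, not something this run produces), the total can be recomputed with a one-liner:

  grep '^lcore ' event_perf.log | awk '{ sum += $3 } END { printf "total: %d events/s\n", sum }'
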
00:24:40.279 [2024-11-20 05:35:59.950600] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:24:40.279 [2024-11-20 05:36:00.105979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.279 [2024-11-20 05:36:00.162722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.673 test_start 00:24:41.673 oneshot 00:24:41.673 tick 100 00:24:41.673 tick 100 00:24:41.673 tick 250 00:24:41.673 tick 100 00:24:41.673 tick 100 00:24:41.673 tick 250 00:24:41.673 tick 100 00:24:41.673 tick 500 00:24:41.673 tick 100 00:24:41.673 tick 100 00:24:41.673 tick 250 00:24:41.673 tick 100 00:24:41.673 tick 100 00:24:41.673 test_end 00:24:41.673 00:24:41.673 real 0m1.282s 00:24:41.673 user 0m1.119s 00:24:41.673 sys 0m0.055s 00:24:41.673 05:36:01 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:41.673 ************************************ 00:24:41.673 END TEST event_reactor 00:24:41.673 ************************************ 00:24:41.673 05:36:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:24:41.673 05:36:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:24:41.673 05:36:01 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:41.673 05:36:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:41.673 05:36:01 event -- common/autotest_common.sh@10 -- # set +x 00:24:41.673 ************************************ 00:24:41.673 START TEST event_reactor_perf 00:24:41.673 ************************************ 00:24:41.673 05:36:01 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:24:41.673 [2024-11-20 05:36:01.283465] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:24:41.673 [2024-11-20 05:36:01.283563] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60229 ] 00:24:41.673 [2024-11-20 05:36:01.432690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.673 [2024-11-20 05:36:01.500643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.047 test_start 00:24:43.047 test_end 00:24:43.047 Performance: 391096 events per second 00:24:43.047 00:24:43.047 real 0m1.282s 00:24:43.047 user 0m1.122s 00:24:43.047 sys 0m0.054s 00:24:43.047 05:36:02 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:43.047 ************************************ 00:24:43.047 END TEST event_reactor_perf 00:24:43.047 ************************************ 00:24:43.047 05:36:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:24:43.047 05:36:02 event -- event/event.sh@49 -- # uname -s 00:24:43.047 05:36:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:24:43.047 05:36:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:24:43.047 05:36:02 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:43.048 05:36:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:43.048 05:36:02 event -- common/autotest_common.sh@10 -- # set +x 00:24:43.048 ************************************ 00:24:43.048 START TEST event_scheduler 00:24:43.048 ************************************ 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:24:43.048 * Looking for test storage... 
00:24:43.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:24:43.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.048 05:36:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.048 --rc genhtml_branch_coverage=1 00:24:43.048 --rc genhtml_function_coverage=1 00:24:43.048 --rc genhtml_legend=1 00:24:43.048 --rc geninfo_all_blocks=1 00:24:43.048 --rc geninfo_unexecuted_blocks=1 00:24:43.048 00:24:43.048 ' 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.048 --rc genhtml_branch_coverage=1 00:24:43.048 --rc genhtml_function_coverage=1 00:24:43.048 --rc genhtml_legend=1 00:24:43.048 --rc geninfo_all_blocks=1 00:24:43.048 --rc geninfo_unexecuted_blocks=1 00:24:43.048 00:24:43.048 ' 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.048 --rc genhtml_branch_coverage=1 00:24:43.048 --rc genhtml_function_coverage=1 00:24:43.048 --rc genhtml_legend=1 00:24:43.048 --rc geninfo_all_blocks=1 00:24:43.048 --rc geninfo_unexecuted_blocks=1 00:24:43.048 00:24:43.048 ' 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:43.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.048 --rc genhtml_branch_coverage=1 00:24:43.048 --rc genhtml_function_coverage=1 00:24:43.048 --rc genhtml_legend=1 00:24:43.048 --rc geninfo_all_blocks=1 00:24:43.048 --rc geninfo_unexecuted_blocks=1 00:24:43.048 00:24:43.048 ' 00:24:43.048 05:36:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:24:43.048 05:36:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60293 00:24:43.048 05:36:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:24:43.048 05:36:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60293 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 60293 ']' 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.048 05:36:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:43.048 05:36:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:43.048 [2024-11-20 05:36:02.890496] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
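
The scripts/common.sh trace repeated above is the harness checking whether the installed lcov predates 2.x: lt 1.15 2 splits both version strings on dots, walks the fields, finds 1 < 2 in the first position, and returns 0 (true), which selects the pre-2.0 LCOV_OPTS that follow. A compressed standalone sketch of that comparison, with illustrative names rather than the verbatim scripts/common.sh source:

  lt() {
      # true (exit 0) when dotted version $1 sorts before $2
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0
          ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov predates 2.x'   # same verdict as the trace
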
00:24:43.048 [2024-11-20 05:36:02.890817] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60293 ] 00:24:43.307 [2024-11-20 05:36:03.045005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.307 [2024-11-20 05:36:03.108651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.307 [2024-11-20 05:36:03.108749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.307 [2024-11-20 05:36:03.108751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.307 [2024-11-20 05:36:03.108692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.287 05:36:03 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:44.287 05:36:03 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:24:44.287 05:36:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:24:44.287 05:36:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.287 05:36:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:44.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:44.287 POWER: Cannot set governor of lcore 0 to userspace 00:24:44.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:44.287 POWER: Cannot set governor of lcore 0 to performance 00:24:44.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:44.287 POWER: Cannot set governor of lcore 0 to userspace 00:24:44.287 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:44.287 POWER: Cannot set governor of lcore 0 to userspace 00:24:44.287 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:24:44.287 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:24:44.287 POWER: Unable to set Power Management Environment for lcore 0 00:24:44.287 [2024-11-20 05:36:03.951633] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:24:44.287 [2024-11-20 05:36:03.951649] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:24:44.287 [2024-11-20 05:36:03.951659] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:24:44.287 [2024-11-20 05:36:03.951674] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:24:44.287 [2024-11-20 05:36:03.951688] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:24:44.287 [2024-11-20 05:36:03.951706] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:24:44.287 05:36:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.287 05:36:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:24:44.287 05:36:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.287 05:36:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 [2024-11-20 05:36:04.036350] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:24:44.288 05:36:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:24:44.288 05:36:04 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:44.288 05:36:04 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 ************************************ 00:24:44.288 START TEST scheduler_create_thread 00:24:44.288 ************************************ 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 2 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 3 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 4 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 5 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 6 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 7 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 8 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 9 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 10 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.288 05:36:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:45.223 05:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.223 05:36:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:24:45.223 05:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.223 05:36:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:46.600 05:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.600 05:36:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:24:46.600 05:36:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:24:46.600 05:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.600 05:36:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:47.539 ************************************ 00:24:47.539 END TEST scheduler_create_thread 00:24:47.539 ************************************ 00:24:47.539 05:36:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.539 00:24:47.539 real 0m3.377s 00:24:47.539 user 0m0.019s 00:24:47.539 sys 0m0.009s 00:24:47.539 05:36:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:47.539 05:36:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:47.808 05:36:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:47.808 05:36:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60293 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 60293 ']' 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 60293 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60293 00:24:47.808 killing process with pid 60293 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60293' 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 60293 00:24:47.808 05:36:07 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 60293 00:24:48.066 [2024-11-20 05:36:07.809550] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
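
killprocess, traced here for pid 60293 just as it was for 60051 at the end of the dpdk_mem_utility test, probes before it signals: kill -0 confirms the pid is alive, ps --no-headers -o comm= recovers the process name (reactor_2 in this run) so sudo-owned processes can get special treatment, and only then is the process killed and reaped. A condensed sketch of that pattern; the real helper in common/autotest_common.sh additionally handles the sudo case:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0     # liveness probe, as in the trace
      local name
      name=$(ps --no-headers -o comm= "$pid")    # reactor_2 here
      echo "killing process with pid $pid"
      [ "$name" = sudo ] || kill "$pid"          # sudo-owned pids need extra care (omitted)
      wait "$pid" 2>/dev/null || true            # reap (valid when pid is our child)
  }
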
00:24:48.324 00:24:48.324 real 0m5.423s 00:24:48.324 user 0m11.249s 00:24:48.324 sys 0m0.467s 00:24:48.324 ************************************ 00:24:48.324 END TEST event_scheduler 00:24:48.324 ************************************ 00:24:48.324 05:36:08 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:48.324 05:36:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:48.324 05:36:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:24:48.324 05:36:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:24:48.324 05:36:08 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:48.324 05:36:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:48.324 05:36:08 event -- common/autotest_common.sh@10 -- # set +x 00:24:48.324 ************************************ 00:24:48.324 START TEST app_repeat 00:24:48.324 ************************************ 00:24:48.324 05:36:08 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:24:48.324 Process app_repeat pid: 60416 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60416 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60416' 00:24:48.324 spdk_app_start Round 0 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:24:48.324 05:36:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60416 /var/tmp/spdk-nbd.sock 00:24:48.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:48.324 05:36:08 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60416 ']' 00:24:48.324 05:36:08 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:48.324 05:36:08 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:48.324 05:36:08 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:48.324 05:36:08 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:48.324 05:36:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:48.324 [2024-11-20 05:36:08.129002] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:24:48.324 [2024-11-20 05:36:08.129400] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60416 ] 00:24:48.582 [2024-11-20 05:36:08.289652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:48.582 [2024-11-20 05:36:08.360952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.582 [2024-11-20 05:36:08.360963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.582 05:36:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:48.582 05:36:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:24:48.582 05:36:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:48.841 Malloc0 00:24:48.841 05:36:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:49.409 Malloc1 00:24:49.409 05:36:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.409 05:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:49.409 /dev/nbd0 00:24:49.667 05:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:49.667 05:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:49.667 05:36:09 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:49.667 05:36:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:49.667 1+0 records in 00:24:49.667 1+0 records out 00:24:49.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569096 s, 7.2 MB/s 00:24:49.668 05:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:49.668 05:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:24:49.668 05:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:49.668 05:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:49.668 05:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:24:49.668 05:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.668 05:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.668 05:36:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:49.926 /dev/nbd1 00:24:49.926 05:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:49.926 05:36:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:49.926 05:36:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:49.926 05:36:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:24:49.926 05:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:49.926 05:36:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:49.926 05:36:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:49.926 05:36:09 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:49.927 1+0 records in 00:24:49.927 1+0 records out 00:24:49.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326669 s, 12.5 MB/s 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:49.927 05:36:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:24:49.927 05:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:49.927 05:36:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:49.927 05:36:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:49.927 05:36:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.927 
05:36:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:50.185 05:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:50.185 { 00:24:50.185 "bdev_name": "Malloc0", 00:24:50.185 "nbd_device": "/dev/nbd0" 00:24:50.185 }, 00:24:50.185 { 00:24:50.185 "bdev_name": "Malloc1", 00:24:50.185 "nbd_device": "/dev/nbd1" 00:24:50.185 } 00:24:50.185 ]' 00:24:50.185 05:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:50.185 { 00:24:50.185 "bdev_name": "Malloc0", 00:24:50.186 "nbd_device": "/dev/nbd0" 00:24:50.186 }, 00:24:50.186 { 00:24:50.186 "bdev_name": "Malloc1", 00:24:50.186 "nbd_device": "/dev/nbd1" 00:24:50.186 } 00:24:50.186 ]' 00:24:50.186 05:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:50.186 05:36:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:50.186 /dev/nbd1' 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:50.445 /dev/nbd1' 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:50.445 256+0 records in 00:24:50.445 256+0 records out 00:24:50.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00872748 s, 120 MB/s 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:50.445 256+0 records in 00:24:50.445 256+0 records out 00:24:50.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279051 s, 37.6 MB/s 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:50.445 256+0 records in 00:24:50.445 256+0 records out 00:24:50.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330951 s, 31.7 MB/s 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:50.445 05:36:10 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.445 05:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.704 05:36:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:51.013 05:36:10 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:51.013 05:36:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:51.580 05:36:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:24:51.580 05:36:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:51.580 05:36:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:24:51.838 [2024-11-20 05:36:11.608549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:51.838 [2024-11-20 05:36:11.660834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.838 [2024-11-20 05:36:11.660852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.838 [2024-11-20 05:36:11.703348] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:51.838 [2024-11-20 05:36:11.703406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:55.122 05:36:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:24:55.122 spdk_app_start Round 1 00:24:55.122 05:36:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:24:55.122 05:36:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60416 /var/tmp/spdk-nbd.sock 00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60416 ']' 00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:55.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
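
Round 0 above put the full nbd data path through its paces before tearing it down: nbd_dd_data_verify first wrote 1 MiB of random data (256 blocks of 4096 bytes) through each exported device with O_DIRECT, then cmp read each device back against the source file byte by byte; only after both devices verified were they stopped and the empty nbd_get_disks list confirmed. Round 1 is now repeating the cycle on fresh Malloc bdevs. Condensed from the trace (paths shortened), the verify round trip is essentially:

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256         # 1 MiB reference file
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct  # push it through the nbd device
      cmp -b -n 1M nbdrandtest $d                             # read back and compare
  done
  rm nbdrandtest
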
00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:55.122 05:36:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:24:55.122 05:36:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:55.380 Malloc0 00:24:55.380 05:36:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:55.639 Malloc1 00:24:55.639 05:36:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:55.639 05:36:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:55.898 /dev/nbd0 00:24:55.898 05:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:55.898 05:36:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:55.898 1+0 records in 00:24:55.898 1+0 records out 
00:24:55.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207756 s, 19.7 MB/s 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:55.898 05:36:15 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:24:55.898 05:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:55.898 05:36:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:55.898 05:36:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:56.156 /dev/nbd1 00:24:56.156 05:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:56.156 05:36:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:56.156 1+0 records in 00:24:56.156 1+0 records out 00:24:56.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405624 s, 10.1 MB/s 00:24:56.156 05:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:56.157 05:36:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:24:56.157 05:36:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:56.157 05:36:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:56.157 05:36:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:24:56.157 05:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:56.157 05:36:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:56.157 05:36:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:56.157 05:36:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:56.157 05:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:56.723 { 00:24:56.723 "bdev_name": "Malloc0", 00:24:56.723 "nbd_device": "/dev/nbd0" 00:24:56.723 }, 00:24:56.723 { 00:24:56.723 "bdev_name": "Malloc1", 00:24:56.723 "nbd_device": "/dev/nbd1" 00:24:56.723 } 
00:24:56.723 ]' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:56.723 { 00:24:56.723 "bdev_name": "Malloc0", 00:24:56.723 "nbd_device": "/dev/nbd0" 00:24:56.723 }, 00:24:56.723 { 00:24:56.723 "bdev_name": "Malloc1", 00:24:56.723 "nbd_device": "/dev/nbd1" 00:24:56.723 } 00:24:56.723 ]' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:56.723 /dev/nbd1' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:56.723 /dev/nbd1' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:56.723 256+0 records in 00:24:56.723 256+0 records out 00:24:56.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650108 s, 161 MB/s 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:56.723 256+0 records in 00:24:56.723 256+0 records out 00:24:56.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023362 s, 44.9 MB/s 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:56.723 256+0 records in 00:24:56.723 256+0 records out 00:24:56.723 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346683 s, 30.2 MB/s 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:56.723 05:36:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:56.982 05:36:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:57.241 05:36:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:57.241 05:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:57.499 05:36:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:24:57.499 05:36:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:57.757 05:36:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:24:58.041 [2024-11-20 05:36:17.738679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:58.041 [2024-11-20 05:36:17.793094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.041 [2024-11-20 05:36:17.793100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.041 [2024-11-20 05:36:17.838394] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:58.041 [2024-11-20 05:36:17.838469] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:25:01.347 spdk_app_start Round 2 00:25:01.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:01.347 05:36:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:25:01.347 05:36:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:25:01.347 05:36:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60416 /var/tmp/spdk-nbd.sock 00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60416 ']' 00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
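The dd and cmp records in the round above follow a fixed write-then-verify pattern; a hedged reconstruction of that pass, with paths and sizes as in the log and error handling omitted:

    #!/usr/bin/env bash
    set -e
    # Reconstruction of the nbd_dd_data_verify write+verify pass seen in
    # the trace; an illustrative sketch rather than the verbatim helper.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: 1 MiB of random data, copied onto each NBD device with
    # O_DIRECT so the writes reach the backing Malloc bdev immediately
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1M of each device against the
    # reference file; cmp exits non-zero on the first differing byte
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"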
00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:01.347 05:36:20 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:25:01.347 05:36:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:25:01.347 Malloc0 00:25:01.347 05:36:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:25:01.606 Malloc1 00:25:01.606 05:36:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:01.606 05:36:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:25:01.864 /dev/nbd0 00:25:01.864 05:36:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:01.864 05:36:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:25:01.864 1+0 records in 00:25:01.864 1+0 records out 
00:25:01.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253996 s, 16.1 MB/s 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:01.864 05:36:21 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:25:01.864 05:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:01.864 05:36:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:01.864 05:36:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:25:02.122 /dev/nbd1 00:25:02.381 05:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:02.381 05:36:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:25:02.381 1+0 records in 00:25:02.381 1+0 records out 00:25:02.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343345 s, 11.9 MB/s 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:02.381 05:36:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:25:02.381 05:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:02.381 05:36:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:02.381 05:36:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:02.381 05:36:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:02.381 05:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:02.641 { 00:25:02.641 "bdev_name": "Malloc0", 00:25:02.641 "nbd_device": "/dev/nbd0" 00:25:02.641 }, 00:25:02.641 { 00:25:02.641 "bdev_name": "Malloc1", 00:25:02.641 "nbd_device": "/dev/nbd1" 00:25:02.641 } 
00:25:02.641 ]' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:02.641 { 00:25:02.641 "bdev_name": "Malloc0", 00:25:02.641 "nbd_device": "/dev/nbd0" 00:25:02.641 }, 00:25:02.641 { 00:25:02.641 "bdev_name": "Malloc1", 00:25:02.641 "nbd_device": "/dev/nbd1" 00:25:02.641 } 00:25:02.641 ]' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:02.641 /dev/nbd1' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:02.641 /dev/nbd1' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:25:02.641 256+0 records in 00:25:02.641 256+0 records out 00:25:02.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00973574 s, 108 MB/s 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:02.641 256+0 records in 00:25:02.641 256+0 records out 00:25:02.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224669 s, 46.7 MB/s 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:02.641 256+0 records in 00:25:02.641 256+0 records out 00:25:02.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251081 s, 41.8 MB/s 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:02.641 05:36:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:25:02.641 05:36:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:25:02.900 05:36:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:25:02.900 05:36:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:02.900 05:36:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:02.900 05:36:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:02.900 05:36:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:25:02.900 05:36:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:02.900 05:36:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:03.159 05:36:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:03.417 05:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:03.417 05:36:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:03.417 05:36:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:03.418 05:36:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:03.676 05:36:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:25:03.676 05:36:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:25:04.245 05:36:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:25:04.245 [2024-11-20 05:36:24.004261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:04.245 [2024-11-20 05:36:24.060566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.245 [2024-11-20 05:36:24.060570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.245 [2024-11-20 05:36:24.103725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:25:04.245 [2024-11-20 05:36:24.103782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:25:07.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:07.527 05:36:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60416 /var/tmp/spdk-nbd.sock 00:25:07.527 05:36:26 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 60416 ']' 00:25:07.527 05:36:26 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:07.527 05:36:26 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:07.527 05:36:26 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
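Each round above ends the same way: the app is torn down over RPC and restarted for the next iteration. The control loop below is inferred from the event.sh trace, with the per-round helper bodies elided, so the real script may differ in detail.

    #!/usr/bin/env bash
    # Inferred shape of the app_repeat round loop: per round, re-create the
    # Malloc bdevs, attach them as /dev/nbd0 and /dev/nbd1, run the dd/cmp
    # verify pass, then kill the app over RPC and wait before restarting.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_server=/var/tmp/spdk-nbd.sock

    for i in {0..2}; do
        echo "spdk_app_start Round $((i + 1))"
        # ... bdev_malloc_create 64 4096 (twice), nbd_start_disk for each
        # bdev, and the write/verify pass sketched earlier ...
        "$rpc_py" -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3   # matches the event.sh@35 `sleep 3` records in the log
    done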
00:25:07.527 05:36:26 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:07.527 05:36:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:25:07.527 05:36:27 event.app_repeat -- event/event.sh@39 -- # killprocess 60416 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 60416 ']' 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 60416 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60416 00:25:07.527 killing process with pid 60416 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60416' 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@971 -- # kill 60416 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@976 -- # wait 60416 00:25:07.527 spdk_app_start is called in Round 0. 00:25:07.527 Shutdown signal received, stop current app iteration 00:25:07.527 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:25:07.527 spdk_app_start is called in Round 1. 00:25:07.527 Shutdown signal received, stop current app iteration 00:25:07.527 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:25:07.527 spdk_app_start is called in Round 2. 00:25:07.527 Shutdown signal received, stop current app iteration 00:25:07.527 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:25:07.527 spdk_app_start is called in Round 3. 00:25:07.527 Shutdown signal received, stop current app iteration 00:25:07.527 05:36:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:25:07.527 05:36:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:25:07.527 00:25:07.527 real 0m19.277s 00:25:07.527 user 0m43.366s 00:25:07.527 sys 0m3.767s 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:07.527 ************************************ 00:25:07.527 END TEST app_repeat 00:25:07.527 ************************************ 00:25:07.527 05:36:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:25:07.527 05:36:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:25:07.527 05:36:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:25:07.527 05:36:27 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:07.527 05:36:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:07.527 05:36:27 event -- common/autotest_common.sh@10 -- # set +x 00:25:07.527 ************************************ 00:25:07.527 START TEST cpu_locks 00:25:07.527 ************************************ 00:25:07.527 05:36:27 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:25:07.785 * Looking for test storage... 
00:25:07.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.785 05:36:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:07.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.785 --rc genhtml_branch_coverage=1 00:25:07.785 --rc genhtml_function_coverage=1 00:25:07.785 --rc genhtml_legend=1 00:25:07.785 --rc geninfo_all_blocks=1 00:25:07.785 --rc geninfo_unexecuted_blocks=1 00:25:07.785 00:25:07.785 ' 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:07.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.785 --rc genhtml_branch_coverage=1 00:25:07.785 --rc genhtml_function_coverage=1 
00:25:07.785 --rc genhtml_legend=1 00:25:07.785 --rc geninfo_all_blocks=1 00:25:07.785 --rc geninfo_unexecuted_blocks=1 00:25:07.785 00:25:07.785 ' 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:07.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.785 --rc genhtml_branch_coverage=1 00:25:07.785 --rc genhtml_function_coverage=1 00:25:07.785 --rc genhtml_legend=1 00:25:07.785 --rc geninfo_all_blocks=1 00:25:07.785 --rc geninfo_unexecuted_blocks=1 00:25:07.785 00:25:07.785 ' 00:25:07.785 05:36:27 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:07.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.785 --rc genhtml_branch_coverage=1 00:25:07.785 --rc genhtml_function_coverage=1 00:25:07.785 --rc genhtml_legend=1 00:25:07.786 --rc geninfo_all_blocks=1 00:25:07.786 --rc geninfo_unexecuted_blocks=1 00:25:07.786 00:25:07.786 ' 00:25:07.786 05:36:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:25:07.786 05:36:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:25:07.786 05:36:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:25:07.786 05:36:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:25:07.786 05:36:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:07.786 05:36:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:07.786 05:36:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:07.786 ************************************ 00:25:07.786 START TEST default_locks 00:25:07.786 ************************************ 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61046 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61046 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 61046 ']' 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:07.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:25:07.786 05:36:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:08.043 [2024-11-20 05:36:27.738017] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:08.043 [2024-11-20 05:36:27.738896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61046 ] 00:25:08.043 [2024-11-20 05:36:27.892433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.043 [2024-11-20 05:36:27.957047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.975 05:36:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:08.975 05:36:28 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:25:08.975 05:36:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61046 00:25:08.975 05:36:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61046 00:25:08.975 05:36:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61046 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 61046 ']' 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 61046 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61046 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61046' 00:25:09.555 killing process with pid 61046 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 61046 00:25:09.555 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 61046 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61046 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61046 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 61046 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 61046 ']' 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:10.123 05:36:29 
event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:25:10.123 ERROR: process (pid: 61046) is no longer running 00:25:10.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61046) - No such process 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:25:10.123 00:25:10.123 real 0m2.167s 00:25:10.123 user 0m2.353s 00:25:10.123 sys 0m0.614s 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:10.123 05:36:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:25:10.123 ************************************ 00:25:10.123 END TEST default_locks 00:25:10.123 ************************************ 00:25:10.123 05:36:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:25:10.123 05:36:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:10.123 05:36:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:10.123 05:36:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:10.123 ************************************ 00:25:10.123 START TEST default_locks_via_rpc 00:25:10.123 ************************************ 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61116 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61116 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61116 ']' 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:10.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
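The default_locks trace above leans on two helpers: a lock probe built on lslocks, and a guarded kill. Hedged sketches of both follow, inferred from the xtrace output; the real cpu_locks.sh and autotest_common.sh may differ in detail.

    #!/usr/bin/env bash
    # locks_exist: spdk_tgt -m 0x1 takes a POSIX file lock per claimed CPU
    # core; lslocks -p lists the locks held by a pid, so a running target
    # must show an spdk_cpu_lock entry.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # killprocess: refuse to signal anything that no longer looks like an
    # SPDK reactor, then kill and reap it (simplified from the trace).
    killprocess() {
        local pid=$1
        kill -0 "$pid"                           # pid must still exist
        local name
        name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in the log
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

    # After killprocess, the `NOT waitforlisten $pid` step in the trace
    # asserts the negative: waiting on a dead pid must fail, which is the
    # "ERROR: process (pid: 61046) is no longer running" line above.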
00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:10.124 05:36:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:10.124 [2024-11-20 05:36:29.965367] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:10.124 [2024-11-20 05:36:29.965496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61116 ] 00:25:10.383 [2024-11-20 05:36:30.114634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.383 [2024-11-20 05:36:30.198654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.319 05:36:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:11.319 05:36:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:25:11.319 05:36:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:25:11.319 05:36:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.319 05:36:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61116 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61116 00:25:11.319 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61116 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 61116 ']' 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 61116 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61116 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:11.886 killing process with pid 61116 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61116' 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 61116 00:25:11.886 05:36:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 61116 00:25:12.452 00:25:12.452 real 0m2.290s 00:25:12.452 user 0m2.376s 00:25:12.452 sys 0m0.803s 00:25:12.452 05:36:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:12.452 ************************************ 00:25:12.452 END TEST default_locks_via_rpc 00:25:12.452 ************************************ 00:25:12.452 05:36:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:12.452 05:36:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:25:12.452 05:36:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:12.453 05:36:32 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:12.453 05:36:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 ************************************ 00:25:12.453 START TEST non_locking_app_on_locked_coremask 00:25:12.453 ************************************ 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61185 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61185 /var/tmp/spdk.sock 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61185 ']' 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:12.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:12.453 05:36:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:12.453 [2024-11-20 05:36:32.314494] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:12.453 [2024-11-20 05:36:32.314602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61185 ] 00:25:12.711 [2024-11-20 05:36:32.462256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.711 [2024-11-20 05:36:32.544895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61213 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61213 /var/tmp/spdk2.sock 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61213 ']' 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:13.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:13.646 05:36:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:13.646 [2024-11-20 05:36:33.433093] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:13.646 [2024-11-20 05:36:33.433194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61213 ] 00:25:13.905 [2024-11-20 05:36:33.594591] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
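The waitforlisten calls traced around autotest_common.sh@833-@866 reduce to polling the target's RPC socket until it answers. A sketch under that assumption — the retry bookkeeping (max_retries=100, the rpc_addr default) is in the trace, while the rpc_get_methods probe and the sleep interval are guesses at the elided body:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        # assumed probe: any cheap RPC succeeds once the socket is up
        scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}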
00:25:13.905 [2024-11-20 05:36:33.594659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.905 [2024-11-20 05:36:33.767011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.840 05:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:14.840 05:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:25:14.840 05:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61185 00:25:14.840 05:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:14.840 05:36:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61185 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61185 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61185 ']' 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61185 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61185 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:15.779 killing process with pid 61185 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61185' 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61185 00:25:15.779 05:36:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61185 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61213 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61213 ']' 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61213 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61213 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:17.186 killing process with pid 61213 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61213' 00:25:17.186 05:36:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61213 00:25:17.186 05:36:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61213 00:25:17.444 00:25:17.444 real 0m5.104s 00:25:17.444 user 0m5.532s 00:25:17.444 sys 0m1.598s 00:25:17.444 05:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:17.444 05:36:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:17.444 ************************************ 00:25:17.444 END TEST non_locking_app_on_locked_coremask 00:25:17.444 ************************************ 00:25:17.703 05:36:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:25:17.703 05:36:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:17.703 05:36:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:17.703 05:36:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:17.703 ************************************ 00:25:17.703 START TEST locking_app_on_unlocked_coremask 00:25:17.703 ************************************ 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61303 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61303 /var/tmp/spdk.sock 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61303 ']' 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:17.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:17.703 05:36:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:17.703 [2024-11-20 05:36:37.482114] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:17.703 [2024-11-20 05:36:37.482229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61303 ] 00:25:17.961 [2024-11-20 05:36:37.634579] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
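The non_locking_app_on_locked_coremask pass that just ended condenses to two launches; a replay with the flags visible in the trace (backgrounding with & stands in for the harness's process tracking):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$SPDK_TGT -m 0x1 &                               # locks core 0 (spdk_cpu_lock_000)
$SPDK_TGT -m 0x1 --disable-cpumask-locks \
          -r /var/tmp/spdk2.sock &               # same core, opts out: no conflict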
00:25:17.961 [2024-11-20 05:36:37.634646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.961 [2024-11-20 05:36:37.719633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61331 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61331 /var/tmp/spdk2.sock 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61331 ']' 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:18.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:18.896 05:36:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:18.896 [2024-11-20 05:36:38.572970] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:18.896 [2024-11-20 05:36:38.573090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61331 ] 00:25:18.896 [2024-11-20 05:36:38.737943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.155 [2024-11-20 05:36:38.850397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.722 05:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:19.722 05:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:25:19.722 05:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61331 00:25:19.722 05:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61331 00:25:19.722 05:36:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61303 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61303 ']' 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 61303 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61303 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:21.114 killing process with pid 61303 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61303' 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 61303 00:25:21.114 05:36:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 61303 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61331 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61331 ']' 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 61331 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61331 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:21.681 killing process with pid 61331 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61331' 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 61331 00:25:21.681 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 61331 00:25:21.939 00:25:21.939 real 0m4.272s 00:25:21.939 user 0m4.841s 00:25:21.939 sys 0m1.359s 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:21.939 ************************************ 00:25:21.939 END TEST locking_app_on_unlocked_coremask 00:25:21.939 ************************************ 00:25:21.939 05:36:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:25:21.939 05:36:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:21.939 05:36:41 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:21.939 05:36:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:21.939 ************************************ 00:25:21.939 START TEST locking_app_on_locked_coremask 00:25:21.939 ************************************ 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61412 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61412 /var/tmp/spdk.sock 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61412 ']' 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:21.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:21.939 05:36:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:21.939 [2024-11-20 05:36:41.799328] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:21.939 [2024-11-20 05:36:41.799406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61412 ] 00:25:22.197 [2024-11-20 05:36:41.944312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.197 [2024-11-20 05:36:42.000066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61432 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61432 /var/tmp/spdk2.sock 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61432 /var/tmp/spdk2.sock 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61432 /var/tmp/spdk2.sock 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 61432 ']' 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:22.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:22.456 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:22.456 [2024-11-20 05:36:42.299300] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
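The NOT wrapper traced at autotest_common.sh@650-@653 inverts the wrapped command's status, so this test passes exactly when the second waitforlisten fails. Stripped of the valid_exec_arg and signal-exit (es > 128) handling visible in the trace, it amounts to:

NOT() {
    # succeed only when the wrapped command fails (sketch; the real helper
    # also validates the argument and treats exit codes > 128 specially)
    if "$@"; then
        return 1
    fi
    return 0
}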
00:25:22.456 [2024-11-20 05:36:42.299408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61432 ] 00:25:22.715 [2024-11-20 05:36:42.457087] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61412 has claimed it. 00:25:22.715 [2024-11-20 05:36:42.457142] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:25:23.281 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61432) - No such process 00:25:23.281 ERROR: process (pid: 61432) is no longer running 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61412 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61412 00:25:23.281 05:36:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61412 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 61412 ']' 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 61412 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61412 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:23.849 killing process with pid 61412 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61412' 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 61412 00:25:23.849 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 61412 00:25:24.117 00:25:24.117 real 0m2.120s 00:25:24.117 user 0m2.409s 00:25:24.117 sys 0m0.686s 00:25:24.117 ************************************ 00:25:24.117 END TEST locking_app_on_locked_coremask 00:25:24.117 ************************************ 00:25:24.117 05:36:43 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:24.117 05:36:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:24.117 05:36:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:25:24.117 05:36:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:24.117 05:36:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:24.117 05:36:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:24.117 ************************************ 00:25:24.117 START TEST locking_overlapped_coremask 00:25:24.117 ************************************ 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61482 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61482 /var/tmp/spdk.sock 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61482 ']' 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:24.117 05:36:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:24.117 [2024-11-20 05:36:44.003843] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:24.117 [2024-11-20 05:36:44.003952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61482 ] 00:25:24.435 [2024-11-20 05:36:44.154281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:24.435 [2024-11-20 05:36:44.211772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.435 [2024-11-20 05:36:44.211892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.435 [2024-11-20 05:36:44.211892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61500 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61500 /var/tmp/spdk2.sock 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61500 /var/tmp/spdk2.sock 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61500 /var/tmp/spdk2.sock 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 61500 ']' 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:24.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:24.694 05:36:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:24.694 [2024-11-20 05:36:44.510480] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
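The failure this NOT waitforlisten is waiting for is fixed by the two cpumasks alone: 0x7 spans cores 0-2, 0x1c spans cores 2-4, and they intersect in exactly one bit:

printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2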
00:25:24.694 [2024-11-20 05:36:44.510586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61500 ] 00:25:24.953 [2024-11-20 05:36:44.669400] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61482 has claimed it. 00:25:24.953 [2024-11-20 05:36:44.673513] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:25:25.520 ERROR: process (pid: 61500) is no longer running 00:25:25.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (61500) - No such process 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61482 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 61482 ']' 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 61482 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61482 00:25:25.520 killing process with pid 61482 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61482' 00:25:25.520 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 61482 00:25:25.520 05:36:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 61482 00:25:25.780 00:25:25.780 real 0m1.754s 00:25:25.780 user 0m4.815s 00:25:25.780 sys 0m0.428s 00:25:25.780 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:25.780 ************************************ 00:25:25.780 END TEST locking_overlapped_coremask 00:25:25.780 05:36:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:25.780 ************************************ 00:25:26.038 05:36:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:25:26.038 05:36:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:26.038 05:36:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:26.038 05:36:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:26.038 ************************************ 00:25:26.038 START TEST locking_overlapped_coremask_via_rpc 00:25:26.038 ************************************ 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61546 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61546 /var/tmp/spdk.sock 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61546 ']' 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:26.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:26.038 05:36:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:26.038 [2024-11-20 05:36:45.802246] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:26.038 [2024-11-20 05:36:45.802380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61546 ] 00:25:26.038 [2024-11-20 05:36:45.955068] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
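check_remaining_locks, traced above at event/cpu_locks.sh@36-38, is fully visible in the xtrace; restated without the glob-escaping noise, it asserts one lock file per core of the claimed 0x7 mask:

locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]]   # exactly locks 000, 001, 002 remain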
00:25:26.038 [2024-11-20 05:36:45.955138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:26.297 [2024-11-20 05:36:46.024009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.297 [2024-11-20 05:36:46.024083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.297 [2024-11-20 05:36:46.024082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61568 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61568 /var/tmp/spdk2.sock 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61568 ']' 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:26.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:26.555 05:36:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:26.556 [2024-11-20 05:36:46.316788] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:26.556 [2024-11-20 05:36:46.316878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61568 ] 00:25:26.814 [2024-11-20 05:36:46.475547] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:25:26.814 [2024-11-20 05:36:46.475611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:26.814 [2024-11-20 05:36:46.589540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:26.814 [2024-11-20 05:36:46.593661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.814 [2024-11-20 05:36:46.593661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:27.749 [2024-11-20 05:36:47.402577] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61546 has claimed it. 
00:25:27.749 2024/11/20 05:36:47 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:25:27.749 request: 00:25:27.749 { 00:25:27.749 "method": "framework_enable_cpumask_locks", 00:25:27.749 "params": {} 00:25:27.749 } 00:25:27.749 Got JSON-RPC error response 00:25:27.749 GoRPCClient: error on JSON-RPC call 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61546 /var/tmp/spdk.sock 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61546 ']' 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:27.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:27.749 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61568 /var/tmp/spdk2.sock 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 61568 ']' 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:28.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
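The -32603 failure above can be reproduced by hand against the second instance's socket; the trace goes through the rpc_cmd wrapper and a Go JSON-RPC client, so driving scripts/rpc.py directly is an assumed but equivalent path:

scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# expected: Code=-32603 Msg=Failed to claim CPU core: 2
# (pid 61546, started with -m 0x7, already holds spdk_cpu_lock_000..002)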
00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:28.007 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.320 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:28.320 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:25:28.320 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:25:28.320 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:25:28.320 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:25:28.320 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:25:28.320 00:25:28.320 real 0m2.229s 00:25:28.320 user 0m1.236s 00:25:28.320 sys 0m0.231s 00:25:28.321 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:28.321 ************************************ 00:25:28.321 END TEST locking_overlapped_coremask_via_rpc 00:25:28.321 ************************************ 00:25:28.321 05:36:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.321 05:36:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:25:28.321 05:36:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61546 ]] 00:25:28.321 05:36:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61546 00:25:28.321 05:36:47 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61546 ']' 00:25:28.321 05:36:47 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61546 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61546 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:28.321 killing process with pid 61546 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61546' 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61546 00:25:28.321 05:36:48 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61546 00:25:28.578 05:36:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61568 ]] 00:25:28.578 05:36:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61568 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61568 ']' 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61568 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:28.579 
05:36:48 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61568 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61568' 00:25:28.579 killing process with pid 61568 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 61568 00:25:28.579 05:36:48 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 61568 00:25:28.837 05:36:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:25:28.837 05:36:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:25:28.837 05:36:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61546 ]] 00:25:28.837 05:36:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61546 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61546 ']' 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61546 00:25:28.837 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61546) - No such process 00:25:28.837 Process with pid 61546 is not found 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61546 is not found' 00:25:28.837 05:36:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61568 ]] 00:25:28.837 05:36:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61568 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 61568 ']' 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 61568 00:25:28.837 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (61568) - No such process 00:25:28.837 Process with pid 61568 is not found 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 61568 is not found' 00:25:28.837 05:36:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:25:28.837 00:25:28.837 real 0m21.310s 00:25:28.837 user 0m35.025s 00:25:28.837 sys 0m6.624s 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:28.837 05:36:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:28.837 ************************************ 00:25:28.837 END TEST cpu_locks 00:25:28.837 ************************************ 00:25:29.096 00:25:29.096 real 0m50.427s 00:25:29.096 user 1m36.234s 00:25:29.096 sys 0m11.338s 00:25:29.096 05:36:48 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:29.096 05:36:48 event -- common/autotest_common.sh@10 -- # set +x 00:25:29.096 ************************************ 00:25:29.096 END TEST event 00:25:29.096 ************************************ 00:25:29.096 05:36:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:25:29.096 05:36:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:29.096 05:36:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:29.096 05:36:48 -- common/autotest_common.sh@10 -- # set +x 00:25:29.096 ************************************ 00:25:29.096 START TEST thread 00:25:29.096 ************************************ 00:25:29.096 05:36:48 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:25:29.096 * Looking for test storage... 
00:25:29.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:25:29.096 05:36:48 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:29.096 05:36:48 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:29.096 05:36:48 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:25:29.355 05:36:49 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:29.355 05:36:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.355 05:36:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.356 05:36:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.356 05:36:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.356 05:36:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.356 05:36:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.356 05:36:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.356 05:36:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.356 05:36:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.356 05:36:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.356 05:36:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.356 05:36:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:25:29.356 05:36:49 thread -- scripts/common.sh@345 -- # : 1 00:25:29.356 05:36:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.356 05:36:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:29.356 05:36:49 thread -- scripts/common.sh@365 -- # decimal 1 00:25:29.356 05:36:49 thread -- scripts/common.sh@353 -- # local d=1 00:25:29.356 05:36:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.356 05:36:49 thread -- scripts/common.sh@355 -- # echo 1 00:25:29.356 05:36:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.356 05:36:49 thread -- scripts/common.sh@366 -- # decimal 2 00:25:29.356 05:36:49 thread -- scripts/common.sh@353 -- # local d=2 00:25:29.356 05:36:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.356 05:36:49 thread -- scripts/common.sh@355 -- # echo 2 00:25:29.356 05:36:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.356 05:36:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.356 05:36:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.356 05:36:49 thread -- scripts/common.sh@368 -- # return 0 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:29.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.356 --rc genhtml_branch_coverage=1 00:25:29.356 --rc genhtml_function_coverage=1 00:25:29.356 --rc genhtml_legend=1 00:25:29.356 --rc geninfo_all_blocks=1 00:25:29.356 --rc geninfo_unexecuted_blocks=1 00:25:29.356 00:25:29.356 ' 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:29.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.356 --rc genhtml_branch_coverage=1 00:25:29.356 --rc genhtml_function_coverage=1 00:25:29.356 --rc genhtml_legend=1 00:25:29.356 --rc geninfo_all_blocks=1 00:25:29.356 --rc geninfo_unexecuted_blocks=1 00:25:29.356 00:25:29.356 ' 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:29.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:25:29.356 --rc genhtml_branch_coverage=1 00:25:29.356 --rc genhtml_function_coverage=1 00:25:29.356 --rc genhtml_legend=1 00:25:29.356 --rc geninfo_all_blocks=1 00:25:29.356 --rc geninfo_unexecuted_blocks=1 00:25:29.356 00:25:29.356 ' 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:29.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.356 --rc genhtml_branch_coverage=1 00:25:29.356 --rc genhtml_function_coverage=1 00:25:29.356 --rc genhtml_legend=1 00:25:29.356 --rc geninfo_all_blocks=1 00:25:29.356 --rc geninfo_unexecuted_blocks=1 00:25:29.356 00:25:29.356 ' 00:25:29.356 05:36:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:29.356 05:36:49 thread -- common/autotest_common.sh@10 -- # set +x 00:25:29.356 ************************************ 00:25:29.356 START TEST thread_poller_perf 00:25:29.356 ************************************ 00:25:29.356 05:36:49 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:25:29.356 [2024-11-20 05:36:49.086870] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:29.356 [2024-11-20 05:36:49.087137] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:25:29.356 [2024-11-20 05:36:49.237241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.615 [2024-11-20 05:36:49.301570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.615 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:25:30.551 [2024-11-20T05:36:50.470Z] ====================================== 00:25:30.551 [2024-11-20T05:36:50.470Z] busy:2112425792 (cyc) 00:25:30.551 [2024-11-20T05:36:50.470Z] total_run_count: 329000 00:25:30.551 [2024-11-20T05:36:50.470Z] tsc_hz: 2100000000 (cyc) 00:25:30.551 [2024-11-20T05:36:50.470Z] ====================================== 00:25:30.551 [2024-11-20T05:36:50.470Z] poller_cost: 6420 (cyc), 3057 (nsec) 00:25:30.551 00:25:30.551 real 0m1.285s 00:25:30.551 ************************************ 00:25:30.551 END TEST thread_poller_perf 00:25:30.551 ************************************ 00:25:30.551 user 0m1.133s 00:25:30.551 sys 0m0.044s 00:25:30.551 05:36:50 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:30.551 05:36:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:25:30.551 05:36:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:25:30.551 05:36:50 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:25:30.551 05:36:50 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:30.551 05:36:50 thread -- common/autotest_common.sh@10 -- # set +x 00:25:30.551 ************************************ 00:25:30.551 START TEST thread_poller_perf 00:25:30.551 ************************************ 00:25:30.551 05:36:50 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:25:30.551 [2024-11-20 05:36:50.425267] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:30.551 [2024-11-20 05:36:50.425373] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61758 ] 00:25:30.811 [2024-11-20 05:36:50.575575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.811 Running 1000 pollers for 1 seconds with 0 microseconds period. 
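Both poller_perf invocations in this stretch of the log can be reproduced by hand; the binary path and the -b/-l/-t flags below are copied verbatim from thread.sh's run_test lines, and per the "Running 1000 pollers..." banners, -b is the poller count, -t the run time in seconds, and -l the poller period in microseconds:

    cd /home/vagrant/spdk_repo/spdk
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same count, 0 us period (untimed)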
00:25:30.811 [2024-11-20 05:36:50.630746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.188 [2024-11-20T05:36:52.107Z] ====================================== 00:25:32.188 [2024-11-20T05:36:52.107Z] busy:2102035566 (cyc) 00:25:32.188 [2024-11-20T05:36:52.107Z] total_run_count: 5042000 00:25:32.188 [2024-11-20T05:36:52.107Z] tsc_hz: 2100000000 (cyc) 00:25:32.188 [2024-11-20T05:36:52.107Z] ====================================== 00:25:32.188 [2024-11-20T05:36:52.107Z] poller_cost: 416 (cyc), 198 (nsec) 00:25:32.188 ************************************ 00:25:32.188 END TEST thread_poller_perf 00:25:32.188 ************************************ 00:25:32.188 00:25:32.188 real 0m1.271s 00:25:32.188 user 0m1.116s 00:25:32.188 sys 0m0.049s 00:25:32.188 05:36:51 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:32.188 05:36:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:25:32.188 05:36:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:25:32.188 00:25:32.188 real 0m2.880s 00:25:32.188 user 0m2.413s 00:25:32.188 sys 0m0.258s 00:25:32.188 ************************************ 00:25:32.188 END TEST thread 00:25:32.188 ************************************ 00:25:32.188 05:36:51 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:32.188 05:36:51 thread -- common/autotest_common.sh@10 -- # set +x 00:25:32.188 05:36:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:25:32.188 05:36:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:32.188 05:36:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:32.188 05:36:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:32.188 05:36:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.188 ************************************ 00:25:32.188 START TEST app_cmdline 00:25:32.188 ************************************ 00:25:32.188 05:36:51 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:32.188 * Looking for test storage... 
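The poller_cost lines in the two result tables above follow directly from the raw counters: cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure rescales that by tsc_hz. Checking the reported numbers (shell arithmetic; variable names are illustrative):

    # First run:  2112425792 / 329000  = 6420 cyc -> 6420 * 1e9 / 2100000000 = 3057 nsec
    # Second run: 2102035566 / 5042000 = 416 cyc  -> 416 * 1e9 / 2100000000  = 198 nsec
    busy_cyc=2112425792 runs=329000 tsc_hz=2100000000
    echo "poller_cost: $(( busy_cyc / runs )) (cyc), $(( busy_cyc / runs * 1000000000 / tsc_hz )) (nsec)"

The roughly fifteen-fold gap between the 1 us and 0 us runs is consistent with timed pollers paying extra timer bookkeeping per run, though the tables alone do not show where the cycles go.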
00:25:32.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:32.188 05:36:51 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:32.188 05:36:51 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:25:32.188 05:36:51 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:32.188 05:36:51 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:25:32.188 05:36:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:25:32.189 05:36:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.189 05:36:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:25:32.189 05:36:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.189 05:36:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.189 05:36:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.189 05:36:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:32.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.189 --rc genhtml_branch_coverage=1 00:25:32.189 --rc genhtml_function_coverage=1 00:25:32.189 --rc genhtml_legend=1 00:25:32.189 --rc geninfo_all_blocks=1 00:25:32.189 --rc geninfo_unexecuted_blocks=1 00:25:32.189 00:25:32.189 ' 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:32.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.189 --rc genhtml_branch_coverage=1 00:25:32.189 --rc genhtml_function_coverage=1 00:25:32.189 --rc genhtml_legend=1 00:25:32.189 --rc geninfo_all_blocks=1 00:25:32.189 --rc geninfo_unexecuted_blocks=1 00:25:32.189 
00:25:32.189 ' 00:25:32.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:32.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.189 --rc genhtml_branch_coverage=1 00:25:32.189 --rc genhtml_function_coverage=1 00:25:32.189 --rc genhtml_legend=1 00:25:32.189 --rc geninfo_all_blocks=1 00:25:32.189 --rc geninfo_unexecuted_blocks=1 00:25:32.189 00:25:32.189 ' 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:32.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.189 --rc genhtml_branch_coverage=1 00:25:32.189 --rc genhtml_function_coverage=1 00:25:32.189 --rc genhtml_legend=1 00:25:32.189 --rc geninfo_all_blocks=1 00:25:32.189 --rc geninfo_unexecuted_blocks=1 00:25:32.189 00:25:32.189 ' 00:25:32.189 05:36:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:25:32.189 05:36:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61837 00:25:32.189 05:36:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61837 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 61837 ']' 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.189 05:36:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:32.189 05:36:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:32.189 [2024-11-20 05:36:52.059081] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
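Condensing what cmdline.sh just did above: it launches spdk_tgt with an RPC allow-list, records the pid for the EXIT trap, and blocks until the RPC socket appears. Binary path, flags, and socket are taken from the trace; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, not its actual implementation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    spdk_tgt_pid=$!
    trap 'kill $spdk_tgt_pid' EXIT                         # trace uses killprocess
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # crude waitforlisten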
00:25:32.189 [2024-11-20 05:36:52.059724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61837 ] 00:25:32.447 [2024-11-20 05:36:52.220055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.447 [2024-11-20 05:36:52.284476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.384 05:36:53 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:33.384 05:36:53 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:25:33.384 05:36:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:25:33.643 { 00:25:33.643 "fields": { 00:25:33.643 "commit": "57b682926", 00:25:33.643 "major": 25, 00:25:33.643 "minor": 1, 00:25:33.643 "patch": 0, 00:25:33.643 "suffix": "-pre" 00:25:33.643 }, 00:25:33.643 "version": "SPDK v25.01-pre git sha1 57b682926" 00:25:33.643 } 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:25:33.643 05:36:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:33.643 05:36:53 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:33.903 2024/11/20 05:36:53 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:25:33.903 request: 00:25:33.903 { 00:25:33.903 "method": "env_dpdk_get_mem_stats", 00:25:33.903 "params": {} 00:25:33.903 } 00:25:33.903 Got JSON-RPC error response 00:25:33.903 GoRPCClient: error on JSON-RPC call 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.903 05:36:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61837 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 61837 ']' 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 61837 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61837 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:33.903 killing process with pid 61837 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61837' 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@971 -- # kill 61837 00:25:33.903 05:36:53 app_cmdline -- common/autotest_common.sh@976 -- # wait 61837 00:25:34.470 00:25:34.470 real 0m2.337s 00:25:34.470 user 0m2.946s 00:25:34.470 sys 0m0.576s 00:25:34.470 05:36:54 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:34.470 ************************************ 00:25:34.470 END TEST app_cmdline 00:25:34.470 ************************************ 00:25:34.470 05:36:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:34.470 05:36:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:34.470 05:36:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:34.470 05:36:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:34.470 05:36:54 -- common/autotest_common.sh@10 -- # set +x 00:25:34.470 ************************************ 00:25:34.470 START TEST version 00:25:34.470 ************************************ 00:25:34.470 05:36:54 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:34.470 * Looking for test storage... 
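The Code=-32601 exchange above is the point of the test: with the allow-list from the launch line, only the two whitelisted methods resolve, and anything else is rejected before reaching a handler. Reproduced against the same target, from the repo root (method names and the expected error are taken from the trace):

    ./scripts/rpc.py spdk_get_version        # allowed: prints the version JSON seen above
    ./scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two whitelisted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats  # blocked: Code=-32601 Msg=Method not found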
00:25:34.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:34.470 05:36:54 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:34.470 05:36:54 version -- common/autotest_common.sh@1691 -- # lcov --version 00:25:34.470 05:36:54 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:34.470 05:36:54 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:34.470 05:36:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.470 05:36:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.470 05:36:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.470 05:36:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.470 05:36:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.470 05:36:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.470 05:36:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.470 05:36:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.470 05:36:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.470 05:36:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.470 05:36:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.470 05:36:54 version -- scripts/common.sh@344 -- # case "$op" in 00:25:34.470 05:36:54 version -- scripts/common.sh@345 -- # : 1 00:25:34.470 05:36:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.470 05:36:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:34.470 05:36:54 version -- scripts/common.sh@365 -- # decimal 1 00:25:34.470 05:36:54 version -- scripts/common.sh@353 -- # local d=1 00:25:34.470 05:36:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.470 05:36:54 version -- scripts/common.sh@355 -- # echo 1 00:25:34.470 05:36:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.470 05:36:54 version -- scripts/common.sh@366 -- # decimal 2 00:25:34.470 05:36:54 version -- scripts/common.sh@353 -- # local d=2 00:25:34.470 05:36:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.470 05:36:54 version -- scripts/common.sh@355 -- # echo 2 00:25:34.729 05:36:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.729 05:36:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.729 05:36:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.729 05:36:54 version -- scripts/common.sh@368 -- # return 0 00:25:34.729 05:36:54 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.729 05:36:54 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:34.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.729 --rc genhtml_branch_coverage=1 00:25:34.729 --rc genhtml_function_coverage=1 00:25:34.729 --rc genhtml_legend=1 00:25:34.729 --rc geninfo_all_blocks=1 00:25:34.729 --rc geninfo_unexecuted_blocks=1 00:25:34.729 00:25:34.729 ' 00:25:34.729 05:36:54 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:34.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.729 --rc genhtml_branch_coverage=1 00:25:34.729 --rc genhtml_function_coverage=1 00:25:34.729 --rc genhtml_legend=1 00:25:34.729 --rc geninfo_all_blocks=1 00:25:34.729 --rc geninfo_unexecuted_blocks=1 00:25:34.729 00:25:34.729 ' 00:25:34.729 05:36:54 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:34.729 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:25:34.729 --rc genhtml_branch_coverage=1 00:25:34.729 --rc genhtml_function_coverage=1 00:25:34.729 --rc genhtml_legend=1 00:25:34.729 --rc geninfo_all_blocks=1 00:25:34.729 --rc geninfo_unexecuted_blocks=1 00:25:34.729 00:25:34.729 ' 00:25:34.729 05:36:54 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:34.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.729 --rc genhtml_branch_coverage=1 00:25:34.729 --rc genhtml_function_coverage=1 00:25:34.729 --rc genhtml_legend=1 00:25:34.729 --rc geninfo_all_blocks=1 00:25:34.729 --rc geninfo_unexecuted_blocks=1 00:25:34.729 00:25:34.729 ' 00:25:34.729 05:36:54 version -- app/version.sh@17 -- # get_header_version major 00:25:34.729 05:36:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # cut -f2 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # tr -d '"' 00:25:34.729 05:36:54 version -- app/version.sh@17 -- # major=25 00:25:34.729 05:36:54 version -- app/version.sh@18 -- # get_header_version minor 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # cut -f2 00:25:34.729 05:36:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # tr -d '"' 00:25:34.729 05:36:54 version -- app/version.sh@18 -- # minor=1 00:25:34.729 05:36:54 version -- app/version.sh@19 -- # get_header_version patch 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # cut -f2 00:25:34.729 05:36:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # tr -d '"' 00:25:34.729 05:36:54 version -- app/version.sh@19 -- # patch=0 00:25:34.729 05:36:54 version -- app/version.sh@20 -- # get_header_version suffix 00:25:34.729 05:36:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # cut -f2 00:25:34.729 05:36:54 version -- app/version.sh@14 -- # tr -d '"' 00:25:34.729 05:36:54 version -- app/version.sh@20 -- # suffix=-pre 00:25:34.729 05:36:54 version -- app/version.sh@22 -- # version=25.1 00:25:34.729 05:36:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:25:34.729 05:36:54 version -- app/version.sh@28 -- # version=25.1rc0 00:25:34.729 05:36:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:34.729 05:36:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:25:34.729 05:36:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:25:34.729 05:36:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:25:34.729 00:25:34.729 real 0m0.299s 00:25:34.729 user 0m0.186s 00:25:34.729 sys 0m0.151s 00:25:34.730 05:36:54 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:34.730 05:36:54 version -- common/autotest_common.sh@10 -- # set +x 00:25:34.730 ************************************ 00:25:34.730 END TEST version 00:25:34.730 ************************************ 00:25:34.730 05:36:54 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:25:34.730 05:36:54 -- spdk/autotest.sh@194 -- # uname -s 00:25:34.730 05:36:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:25:34.730 05:36:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:34.730 05:36:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:34.730 05:36:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:25:34.730 05:36:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:34.730 05:36:54 -- common/autotest_common.sh@10 -- # set +x 00:25:34.730 05:36:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:25:34.730 05:36:54 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:25:34.730 05:36:54 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:25:34.730 05:36:54 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:34.730 05:36:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:34.730 05:36:54 -- common/autotest_common.sh@10 -- # set +x 00:25:34.730 ************************************ 00:25:34.730 START TEST nvmf_tcp 00:25:34.730 ************************************ 00:25:34.730 05:36:54 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:25:34.989 * Looking for test storage... 00:25:34.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.989 05:36:54 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.989 --rc genhtml_branch_coverage=1 00:25:34.989 --rc genhtml_function_coverage=1 00:25:34.989 --rc genhtml_legend=1 00:25:34.989 --rc geninfo_all_blocks=1 00:25:34.989 --rc geninfo_unexecuted_blocks=1 00:25:34.989 00:25:34.989 ' 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.989 --rc genhtml_branch_coverage=1 00:25:34.989 --rc genhtml_function_coverage=1 00:25:34.989 --rc genhtml_legend=1 00:25:34.989 --rc geninfo_all_blocks=1 00:25:34.989 --rc geninfo_unexecuted_blocks=1 00:25:34.989 00:25:34.989 ' 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.989 --rc genhtml_branch_coverage=1 00:25:34.989 --rc genhtml_function_coverage=1 00:25:34.989 --rc genhtml_legend=1 00:25:34.989 --rc geninfo_all_blocks=1 00:25:34.989 --rc geninfo_unexecuted_blocks=1 00:25:34.989 00:25:34.989 ' 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:34.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.989 --rc genhtml_branch_coverage=1 00:25:34.989 --rc genhtml_function_coverage=1 00:25:34.989 --rc genhtml_legend=1 00:25:34.989 --rc geninfo_all_blocks=1 00:25:34.989 --rc geninfo_unexecuted_blocks=1 00:25:34.989 00:25:34.989 ' 00:25:34.989 05:36:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:25:34.989 05:36:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:34.989 05:36:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:34.989 05:36:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:34.989 ************************************ 00:25:34.989 START TEST nvmf_target_core 00:25:34.989 ************************************ 00:25:34.989 05:36:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:25:34.989 * Looking for test storage... 00:25:34.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:25:34.989 05:36:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:25:35.249 05:36:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.249 --rc genhtml_branch_coverage=1 00:25:35.249 --rc genhtml_function_coverage=1 00:25:35.249 --rc genhtml_legend=1 00:25:35.249 --rc geninfo_all_blocks=1 00:25:35.249 --rc geninfo_unexecuted_blocks=1 00:25:35.249 00:25:35.249 ' 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.249 --rc genhtml_branch_coverage=1 00:25:35.249 --rc genhtml_function_coverage=1 00:25:35.249 --rc genhtml_legend=1 00:25:35.249 --rc geninfo_all_blocks=1 00:25:35.249 --rc geninfo_unexecuted_blocks=1 00:25:35.249 00:25:35.249 ' 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.249 --rc genhtml_branch_coverage=1 00:25:35.249 --rc genhtml_function_coverage=1 00:25:35.249 --rc genhtml_legend=1 00:25:35.249 --rc geninfo_all_blocks=1 00:25:35.249 --rc geninfo_unexecuted_blocks=1 00:25:35.249 00:25:35.249 ' 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.249 --rc genhtml_branch_coverage=1 00:25:35.249 --rc genhtml_function_coverage=1 00:25:35.249 --rc genhtml_legend=1 00:25:35.249 --rc geninfo_all_blocks=1 00:25:35.249 --rc geninfo_unexecuted_blocks=1 00:25:35.249 00:25:35.249 ' 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.249 05:36:55 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.250 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:25:35.250 ************************************ 00:25:35.250 START TEST nvmf_abort 00:25:35.250 ************************************ 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:25:35.250 * Looking for test storage... 
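The "[: : integer expression expected" line above (and its repeat further below) is a benign quirk, not a test failure: an environment flag that was never set reaches a numeric test as the empty string. A minimal reproduction with the usual guard; the variable name here is hypothetical, standing in for whatever expands empty on common.sh line 33:

    unset SOME_TEST_FLAG                      # hypothetical flag, never exported
    [ "$SOME_TEST_FLAG" -eq 1 ]               # -> [: : integer expression expected
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ]          # defaulting to 0 keeps the operand numeric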
00:25:35.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:25:35.250 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.510 --rc genhtml_branch_coverage=1 00:25:35.510 --rc genhtml_function_coverage=1 00:25:35.510 --rc genhtml_legend=1 00:25:35.510 --rc geninfo_all_blocks=1 00:25:35.510 --rc geninfo_unexecuted_blocks=1 00:25:35.510 00:25:35.510 ' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.510 --rc genhtml_branch_coverage=1 00:25:35.510 --rc genhtml_function_coverage=1 00:25:35.510 --rc genhtml_legend=1 00:25:35.510 --rc geninfo_all_blocks=1 00:25:35.510 --rc geninfo_unexecuted_blocks=1 00:25:35.510 00:25:35.510 ' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.510 --rc genhtml_branch_coverage=1 00:25:35.510 --rc genhtml_function_coverage=1 00:25:35.510 --rc genhtml_legend=1 00:25:35.510 --rc geninfo_all_blocks=1 00:25:35.510 --rc geninfo_unexecuted_blocks=1 00:25:35.510 00:25:35.510 ' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:35.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.510 --rc genhtml_branch_coverage=1 00:25:35.510 --rc genhtml_function_coverage=1 00:25:35.510 --rc genhtml_legend=1 00:25:35.510 --rc geninfo_all_blocks=1 00:25:35.510 --rc geninfo_unexecuted_blocks=1 00:25:35.510 00:25:35.510 ' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
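nvmftestinit, which the abort test runs a few traces below, first tears down any leftover interfaces (hence the harmless "Cannot find device" lines) and then rebuilds the virtual topology: veth pairs for initiator and target, a network namespace holding the target ends, and a bridge joining the host-side peers. Condensed from the ip commands in the traces that follow (names and addresses verbatim; the *_if2 duplicates and link-up steps are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers
    ip link set nvmf_tgt_br master nvmf_br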
00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.510 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.511 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:35.511 
05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:25:35.511 Cannot find device "nvmf_init_br" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:35.511 Cannot find device "nvmf_init_br2" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:35.511 Cannot find device "nvmf_tgt_br" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:35.511 Cannot find device "nvmf_tgt_br2" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:35.511 Cannot find device "nvmf_init_br" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:35.511 Cannot find device "nvmf_init_br2" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:35.511 Cannot find device "nvmf_tgt_br" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:35.511 Cannot find device "nvmf_tgt_br2" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:35.511 Cannot find device "nvmf_br" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:35.511 Cannot find device "nvmf_init_if" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:35.511 Cannot find device "nvmf_init_if2" 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:35.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:35.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:35.511 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:35.771 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:36.030 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.030 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:25:36.030 00:25:36.030 --- 10.0.0.3 ping statistics --- 00:25:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.030 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:36.030 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:36.030 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:25:36.030 00:25:36.030 --- 10.0.0.4 ping statistics --- 00:25:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.030 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:25:36.030 00:25:36.030 --- 10.0.0.1 ping statistics --- 00:25:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.030 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:36.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:36.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:25:36.030 00:25:36.030 --- 10.0.0.2 ping statistics --- 00:25:36.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.030 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=62288 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 62288 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 62288 ']' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:36.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:36.030 05:36:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:36.030 [2024-11-20 05:36:55.867793] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
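Taken together, the nvmf_veth_init commands above (and the four successful pings) build this topology: each endpoint is one end of a veth pair, the target-side ends live in the nvmf_tgt_ns_spdk namespace, and all peer ends are enslaved to the nvmf_br bridge. Condensed to one initiator pair and one target pair, and using only commands that appear in the log (the harness also creates the _if2/_br2 twins), a hand-runnable sketch:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP port and allow bridge-local forwarding; the SPDK_NVMF
    # comment tag is what lets teardown sweep these rules later with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore (see the iptr step below)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3    # host -> target namespace, as in the checks above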
00:25:36.030 [2024-11-20 05:36:55.867907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.291 [2024-11-20 05:36:56.025718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:36.291 [2024-11-20 05:36:56.103757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.291 [2024-11-20 05:36:56.103841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.291 [2024-11-20 05:36:56.103863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.291 [2024-11-20 05:36:56.103880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.291 [2024-11-20 05:36:56.103896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.291 [2024-11-20 05:36:56.105229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.291 [2024-11-20 05:36:56.105380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.291 [2024-11-20 05:36:56.105381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.229 05:36:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.229 [2024-11-20 05:36:57.009337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.229 Malloc0 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.229 
Delay0 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:25:37.229 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.230 [2024-11-20 05:36:57.085165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.230 05:36:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:25:37.487 [2024-11-20 05:36:57.297254] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:40.018 Initializing NVMe Controllers 00:25:40.018 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:25:40.018 controller IO queue size 128 less than required 00:25:40.018 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:25:40.018 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:40.018 Initialization complete. Launching workers. 
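For the abort statistics that follow, the relevant setup is the rpc_cmd sequence above. Replayed against a running target with the standalone RPC client it reads as below (a sketch: every command, flag, and value is lifted from the log; the rpc.py path depends on the checkout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256       # 256-entry accept queue
    $rpc bdev_malloc_create 64 4096 -b Malloc0                # 64 MiB, 4096-byte blocks
    # 1000000 us = 1 s average and tail latency on reads and writes: I/O stays
    # queued long enough for the abort example to have something to cancel
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

With queue depth 128 (-q 128) against a one-second-latency namespace, nearly every submitted I/O is still outstanding when the example issues aborts, which is what the submitted/success counters below are measuring.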
00:25:40.018 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28248 00:25:40.018 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28309, failed to submit 62 00:25:40.018 success 28252, unsuccessful 57, failed 0 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.018 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.018 rmmod nvme_tcp 00:25:40.018 rmmod nvme_fabrics 00:25:40.019 rmmod nvme_keyring 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 62288 ']' 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 62288 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 62288 ']' 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 62288 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62288 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:40.019 killing process with pid 62288 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62288' 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 62288 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 62288 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:25:40.019 00:25:40.019 real 0m4.869s 00:25:40.019 user 0m12.812s 00:25:40.019 sys 0m1.366s 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:40.019 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:40.019 ************************************ 00:25:40.019 END TEST nvmf_abort 00:25:40.019 ************************************ 00:25:40.277 05:36:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:25:40.277 05:36:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:40.277 05:36:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:40.277 05:36:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:25:40.277 ************************************ 00:25:40.277 START TEST nvmf_ns_hotplug_stress 00:25:40.277 ************************************ 00:25:40.277 05:36:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:25:40.277 * Looking for test storage... 00:25:40.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.277 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:40.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.278 --rc genhtml_branch_coverage=1 00:25:40.278 --rc genhtml_function_coverage=1 00:25:40.278 --rc genhtml_legend=1 00:25:40.278 --rc geninfo_all_blocks=1 00:25:40.278 --rc geninfo_unexecuted_blocks=1 00:25:40.278 00:25:40.278 ' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.278 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.279 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.279 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.279 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:40.537 05:37:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:40.537 Cannot find device "nvmf_init_br" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:40.537 Cannot find device "nvmf_init_br2" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:40.537 Cannot find device "nvmf_tgt_br" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.537 Cannot find device "nvmf_tgt_br2" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:40.537 Cannot find device "nvmf_init_br" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:40.537 Cannot find device "nvmf_init_br2" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:40.537 Cannot find device "nvmf_tgt_br" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:40.537 Cannot find device "nvmf_tgt_br2" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:40.537 Cannot find device "nvmf_br" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:40.537 Cannot find device "nvmf_init_if" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:40.537 Cannot find device "nvmf_init_if2" 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.537 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:40.795 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:40.795 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.795 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:25:40.795 00:25:40.796 --- 10.0.0.3 ping statistics --- 00:25:40.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.796 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:40.796 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:40.796 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:25:40.796 00:25:40.796 --- 10.0.0.4 ping statistics --- 00:25:40.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.796 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:25:40.796 00:25:40.796 --- 10.0.0.1 ping statistics --- 00:25:40.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.796 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:40.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:25:40.796 00:25:40.796 --- 10.0.0.2 ping statistics --- 00:25:40.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.796 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:40.796 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62609 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62609 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 62609 ']' 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.055 05:37:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:41.055 05:37:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.055 [2024-11-20 05:37:00.781798] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:41.055 [2024-11-20 05:37:00.781928] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.055 [2024-11-20 05:37:00.941713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:41.313 [2024-11-20 05:37:01.004949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.313 [2024-11-20 05:37:01.005026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.313 [2024-11-20 05:37:01.005042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.313 [2024-11-20 05:37:01.005056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.313 [2024-11-20 05:37:01.005067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
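The -m 0xE mask handed to nvmf_tgt is why the EAL reports "Total cores available: 3" above and why exactly three reactors come up next, on cores 1, 2, and 3: bits 1-3 of the mask are set and bit 0 is clear. A quick way to decode any such mask in the shell (a sketch):

    mask=0xE
    for bit in $(seq 0 31); do
        # shift the mask right and test the low bit
        (( (mask >> bit) & 1 )) && echo "core $bit selected"
    done
    # prints: core 1 selected / core 2 selected / core 3 selected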
00:25:41.313 [2024-11-20 05:37:01.006141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.313 [2024-11-20 05:37:01.006234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.313 [2024-11-20 05:37:01.006237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:25:41.313 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:41.571 [2024-11-20 05:37:01.458768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.829 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:42.087 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:42.087 [2024-11-20 05:37:01.971373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:42.087 05:37:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:42.346 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:25:42.604 Malloc0 00:25:42.605 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:42.863 Delay0 00:25:42.863 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:43.121 05:37:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:25:43.381 NULL1 00:25:43.381 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:25:43.653 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:25:43.653 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=62722 00:25:43.653 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:43.653 05:37:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:45.027 Read completed with error (sct=0, sc=11) 00:25:45.027 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:45.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:45.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:45.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:45.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:45.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:45.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:45.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:45.027 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:25:45.027 05:37:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:25:45.285 true 00:25:45.285 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:45.285 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:46.219 05:37:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:46.478 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:25:46.478 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:25:46.735 true 00:25:46.735 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:46.735 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:46.993 05:37:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:47.252 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:25:47.252 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:25:47.511 true 00:25:47.511 05:37:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:47.511 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:47.769 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:48.027 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:25:48.027 05:37:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:25:48.301 true 00:25:48.301 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:48.301 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:49.236 05:37:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:49.495 05:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:25:49.495 05:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:25:49.754 true 00:25:49.754 05:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:49.754 05:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.013 05:37:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:50.269 05:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:25:50.269 05:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:25:50.527 true 00:25:50.527 05:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:50.527 05:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.786 05:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:51.045 05:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:25:51.045 05:37:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:25:51.304 true 00:25:51.304 05:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 62722 00:25:51.304 05:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:52.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:52.239 05:37:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:52.498 05:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:25:52.498 05:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:25:52.756 true 00:25:52.756 05:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:52.756 05:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:53.014 05:37:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.273 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:25:53.273 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:25:53.531 true 00:25:53.531 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:53.532 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:53.790 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.790 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:25:53.790 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:25:54.050 true 00:25:54.050 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:54.050 05:37:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:55.427 05:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:55.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:55.427 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:25:55.427 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:25:55.994 true 00:25:55.994 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:55.994 05:37:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:56.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.562 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:56.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.562 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:56.820 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:25:56.821 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:25:57.081 true 00:25:57.081 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:57.081 05:37:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:58.032 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:25:58.032 05:37:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:25:58.599 
true 00:25:58.599 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:58.599 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:59.167 05:37:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:59.425 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:25:59.425 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:25:59.684 true 00:25:59.684 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:25:59.684 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:59.943 05:37:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:00.201 05:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:00.201 05:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:00.461 true 00:26:00.461 05:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:00.461 05:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.720 05:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:01.289 05:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:01.289 05:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:01.289 true 00:26:01.289 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:01.289 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:02.225 05:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:02.484 05:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:02.484 05:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:02.743 true 00:26:02.743 05:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:02.743 05:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:03.310 05:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:03.310 05:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:03.310 05:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:03.569 true 00:26:03.569 05:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:03.569 05:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:03.828 05:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:04.087 05:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:04.087 05:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:04.347 true 00:26:04.347 05:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:04.347 05:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:05.284 05:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:05.543 05:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:05.543 05:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:05.802 true 00:26:05.802 05:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:05.802 05:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:06.061 05:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:06.320 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:06.320 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:06.579 true 00:26:06.579 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:06.579 05:37:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:06.837 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:07.095 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:07.095 05:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:07.095 true 00:26:07.353 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:07.353 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:08.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:08.289 05:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:08.548 05:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:08.548 05:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:08.806 true 00:26:08.806 05:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:08.806 05:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:09.064 05:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:09.322 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:09.322 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:09.615 true 00:26:09.615 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:09.615 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:09.874 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:10.134 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:10.134 05:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:10.392 true 00:26:10.392 05:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:10.392 05:37:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:11.326 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:11.622 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:11.622 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:11.880 true 00:26:11.880 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:11.880 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:12.139 05:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:12.478 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:12.478 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:12.737 true 00:26:12.737 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:12.737 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:12.996 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:12.996 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:26:12.996 05:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:26:13.254 true 00:26:13.254 05:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:13.255 05:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:14.191 Initializing NVMe Controllers 00:26:14.191 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:14.191 Controller IO queue size 128, less than required. 00:26:14.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.191 Controller IO queue size 128, less than required. 00:26:14.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:14.191 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:14.191 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:14.191 Initialization complete. Launching workers. 
00:26:14.191 ========================================================
00:26:14.191 Latency(us)
00:26:14.191 Device Information : IOPS MiB/s Average min max
00:26:14.191 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1129.50 0.55 56071.38 2859.32 1017664.64
00:26:14.191 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10718.28 5.23 11941.45 2700.56 585752.05
00:26:14.191 ========================================================
00:26:14.191 Total : 11847.78 5.79 16148.54 2700.56 1017664.64
00:26:14.191
00:26:14.191 05:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:14.450 05:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:26:14.450 05:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:26:14.709 true 00:26:14.709 05:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62722 00:26:14.709 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62722) - No such process 00:26:14.709 05:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62722 00:26:14.709 05:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:14.968 05:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:15.227 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:26:15.227 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:26:15.227 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:26:15.227 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:15.227 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:15.486 null0 00:26:15.486 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:15.486 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:15.486 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:15.745 null1 00:26:15.745 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:15.745 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:15.745 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:16.004 null2 00:26:16.004 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:16.004 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:16.004 05:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:16.263 null3 00:26:16.263 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:16.263 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:16.263 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:16.522 null4 00:26:16.522 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:16.522 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:16.522 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:16.781 null5 00:26:16.781 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:16.781 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:16.781 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:17.040 null6 00:26:17.040 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:17.040 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:17.040 05:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:17.304 null7 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
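
Everything from the PERF_PID=62722 assignment up to the perf summary above is a single loop: ns_hotplug_stress.sh keeps detaching namespace 1, re-attaching Delay0, and growing NULL1 by one block for as long as spdk_nvme_perf stays alive, and the "No such process" from kill -0 marks the moment perf finished its 30-second run. In the summary table, NSID 1 (backed by Delay0) averaged roughly 56 ms per read against roughly 12 ms for NSID 2 (NULL1), consistent with the delay bdev inflating latency. The long run of @44-@50 trace lines reads back as the following loop (a reconstruction from the trace; the in-tree script may differ in detail):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    # Hot-plug namespace 1 and resize NULL1 while the perf workload runs.
    while kill -0 "$PERF_PID"; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 $null_size
    done
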
00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
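
The @63 add_remove and @14-@17 lines here show the worker that each background job runs: ten rounds of attaching one namespace and immediately detaching it. Read back from the trace, the helper is essentially this (a reconstruction, not a verbatim copy of target/ns_hotplug_stress.sh):

    # One worker: churn a single (nsid, bdev) pair ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
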
00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
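
The @62-@64 launch loop interleaved through these lines, together with the @66 wait just below, amounts to eight add_remove workers, one per null bdev, all hammering cnode1 concurrently. Again a reconstruction from the trace rather than the script itself:

    # Fan out one background worker per null bdev and wait for the batch.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # shows up below as '@66 wait 63766 63767 ...'
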
00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63766 63767 63769 63771 63772 63774 63777 63778 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.304 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:17.571 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:17.835 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:18.095 05:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 
05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.354 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:18.614 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:18.874 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:19.134 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.134 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:19.134 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:19.134 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:19.134 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:19.134 05:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:19.134 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.393 
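A single @17/@18 cycle can be reproduced by hand against a running target. A minimal sketch, assuming the default RPC socket and that the backing null bdev does not exist yet (bdev_null_create takes the bdev name, a size in MiB, and a block size in bytes):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # One-time setup: a 100 MiB null bdev with 512-byte blocks
    "$rpc_py" bdev_null_create null0 100 512
    # Hot-add it as namespace 1 of cnode1, then hot-remove it again
    "$rpc_py" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1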
05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.393 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:19.651 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.909 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.910 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:19.910 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:19.910 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:19.910 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:20.168 05:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:20.168 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:20.168 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:20.168 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.428 
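rpc.py itself is a thin JSON-RPC 2.0 client over the target's Unix socket, so each add above is one small request/response round trip. A sketch of the equivalent raw request (the params framing follows SPDK's JSON-RPC documentation as remembered here, and /var/tmp/spdk.sock is the SPDK default socket path, assumed for this run):

    # Requires a netcat with Unix-socket support (e.g. nc -U)
    printf '%s' '{"jsonrpc": "2.0", "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                   "namespace": {"bdev_name": "null3", "nsid": 4}}}' \
        | nc -U /var/tmp/spdk.sock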
05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.428 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:20.688 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:20.948 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:21.207 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.207 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.207 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:21.207 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.207 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.207 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:21.208 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.208 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.208 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:21.208 05:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:21.208 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:21.208 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:21.208 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.467 
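When an iteration misbehaves, it helps to snapshot the subsystem between cycles. One way, filtering nvmf_get_subsystems output with jq (the .nqn and .namespaces field names follow SPDK's JSON-RPC output format; verify against your build):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # List the namespaces currently attached to cnode1
    "$rpc_py" nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1").namespaces'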
05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.467 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:21.789 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.068 05:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:22.328 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.586 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:22.845 rmmod nvme_tcp 00:26:22.845 rmmod nvme_fabrics 00:26:22.845 rmmod nvme_keyring 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:22.845 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62609 ']' 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62609 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 62609 ']' 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 62609 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62609 00:26:22.846 killing process with pid 62609 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 62609' 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 62609 00:26:22.846 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 62609 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:23.105 05:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:26:23.364 00:26:23.364 real 0m43.169s 00:26:23.364 user 3m23.754s 00:26:23.364 sys 0m16.700s 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:23.364 ************************************ 00:26:23.364 END TEST nvmf_ns_hotplug_stress 00:26:23.364 ************************************ 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:26:23.364 ************************************ 00:26:23.364 START TEST nvmf_delete_subsystem 00:26:23.364 ************************************ 00:26:23.364 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:26:23.364 * Looking for test storage... 00:26:23.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 
-- # : 1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:23.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.624 --rc genhtml_branch_coverage=1 00:26:23.624 --rc genhtml_function_coverage=1 00:26:23.624 --rc genhtml_legend=1 00:26:23.624 --rc geninfo_all_blocks=1 00:26:23.624 --rc geninfo_unexecuted_blocks=1 00:26:23.624 00:26:23.624 ' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:23.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.624 --rc genhtml_branch_coverage=1 00:26:23.624 --rc genhtml_function_coverage=1 00:26:23.624 --rc genhtml_legend=1 00:26:23.624 --rc geninfo_all_blocks=1 00:26:23.624 --rc geninfo_unexecuted_blocks=1 00:26:23.624 00:26:23.624 ' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:23.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.624 --rc genhtml_branch_coverage=1 00:26:23.624 --rc genhtml_function_coverage=1 00:26:23.624 --rc genhtml_legend=1 00:26:23.624 --rc geninfo_all_blocks=1 00:26:23.624 --rc geninfo_unexecuted_blocks=1 00:26:23.624 00:26:23.624 ' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:23.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.624 --rc genhtml_branch_coverage=1 00:26:23.624 --rc genhtml_function_coverage=1 00:26:23.624 --rc genhtml_legend=1 00:26:23.624 
--rc geninfo_all_blocks=1 00:26:23.624 --rc geninfo_unexecuted_blocks=1 00:26:23.624 00:26:23.624 ' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.624 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:23.625 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:23.625 Cannot find device "nvmf_init_br" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:23.625 Cannot find device "nvmf_init_br2" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:23.625 Cannot find device "nvmf_tgt_br" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.625 Cannot find device "nvmf_tgt_br2" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:23.625 Cannot find device "nvmf_init_br" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:23.625 Cannot find device "nvmf_init_br2" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:23.625 Cannot find device "nvmf_tgt_br" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:23.625 Cannot find device "nvmf_tgt_br2" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:23.625 Cannot find device "nvmf_br" 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:23.625 Cannot find device "nvmf_init_if" 00:26:23.625 05:37:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:26:23.625 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:23.885 Cannot find device "nvmf_init_if2" 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:23.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:23.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:26:23.885 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:26:24.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:24.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms
00:26:24.144
00:26:24.144 --- 10.0.0.3 ping statistics ---
00:26:24.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:24.144 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:26:24.144 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:26:24.144 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms
00:26:24.144
00:26:24.144 --- 10.0.0.4 ping statistics ---
00:26:24.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:24.144 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:26:24.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:24.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms
00:26:24.144
00:26:24.144 --- 10.0.0.1 ping statistics ---
00:26:24.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:24.144 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:26:24.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:24.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms
00:26:24.144
00:26:24.144 --- 10.0.0.2 ping statistics ---
00:26:24.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:24.144 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=65182
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 65182
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 65182 ']'
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:24.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
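Note: the nvmf_veth_init trace above is the whole test topology: veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so teardown can find and strip them later. The four pings verify reachability in both directions before the target starts. A minimal sketch of the same layout, reduced to the first initiator/target pair (interface names and addresses copied from the trace; needs root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end will move into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3    # same reachability check as nvmf/common.sh@222 above

Incidentally, the "[: : integer expression expected" complaint from nvmf/common.sh line 33 earlier in this trace is bash comparing an unset flag with -eq; a guard such as [ "${FLAG:-0}" -eq 1 ] (FLAG standing in for whichever variable the script tests there) would avoid it.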
00:26:24.144 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:24.145 05:37:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:24.145 [2024-11-20 05:37:43.931806] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:24.145 [2024-11-20 05:37:43.931903] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.403 [2024-11-20 05:37:44.090779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:24.403 [2024-11-20 05:37:44.152648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.403 [2024-11-20 05:37:44.152703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.404 [2024-11-20 05:37:44.152719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.404 [2024-11-20 05:37:44.152732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.404 [2024-11-20 05:37:44.152743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.404 [2024-11-20 05:37:44.153767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.404 [2024-11-20 05:37:44.153783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.341 05:37:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 [2024-11-20 05:37:45.008349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 [2024-11-20 05:37:45.024474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 NULL1 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 Delay0 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65238 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:25.341 05:37:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:25.341 [2024-11-20 05:37:45.239185] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
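Note: the rpc_cmd calls traced above are the entire target-side setup for this test: a TCP transport (io unit size 8192), subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001 and at most 10 namespaces, a listener on 10.0.0.3:4420, a 1000 MiB null bdev with 512-byte blocks, and a delay bdev stacked on it with 1,000,000 us (1 s) average and p99 latency for reads and writes alike. Roughly the same sequence as direct rpc.py invocations (paths assumed to match this run; rpc_cmd in the trace is a thin wrapper around this script):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC bdev_null_create NULL1 1000 512                  # 1000 MiB backing bdev, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second Delay0 latency is the point of the test: with spdk_nvme_perf running at queue depth 128 against a device that takes ~1 s per I/O, the queue can never drain, so the nvmf_delete_subsystem below is guaranteed to race against in-flight commands. The storm of "Read/Write completed with error (sct=0, sc=8)" that follows is those queued commands coming back aborted (generic status 0x08, command aborted due to SQ deletion).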
00:26:27.246 05:37:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.246 05:37:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.246 05:37:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 [2024-11-20 05:37:47.275115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2aa50 is same with the state(6) to be set 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Write 
completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Write completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.505 starting I/O failed: -6 00:26:27.505 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 starting I/O failed: -6 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 starting I/O failed: -6 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 
00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 starting I/O failed: -6 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 starting I/O failed: -6 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 starting I/O failed: -6 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 [2024-11-20 05:37:47.275900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb138000c40 is same with the state(6) to be set 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write 
completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:27.506 Write completed with error (sct=0, sc=8) 00:26:27.506 Read completed with error (sct=0, sc=8) 00:26:28.444 [2024-11-20 05:37:48.253175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b26ee0 is same with the state(6) to be set 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 [2024-11-20 05:37:48.273610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2dea0 is same with the state(6) to be set 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.444 Write completed with error (sct=0, sc=8) 00:26:28.444 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with 
error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 [2024-11-20 05:37:48.274353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2b7e0 is same with the state(6) to be set 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 [2024-11-20 05:37:48.274993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb13800d020 is same with the state(6) to be set 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Read completed with error (sct=0, sc=8) 00:26:28.445 Write completed with error (sct=0, 
sc=8)
00:26:28.445 Read completed with error (sct=0, sc=8)
00:26:28.445 Read completed with error (sct=0, sc=8)
00:26:28.445 Read completed with error (sct=0, sc=8)
00:26:28.445 [2024-11-20 05:37:48.275184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb13800d680 is same with the state(6) to be set
00:26:28.445 Initializing NVMe Controllers
00:26:28.445 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:26:28.445 Controller IO queue size 128, less than required.
00:26:28.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:28.445 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:26:28.445 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:26:28.445 Initialization complete. Launching workers.
00:26:28.445 ========================================================
00:26:28.445                                                                                                  Latency(us)
00:26:28.445 Device Information                                                   : IOPS       MiB/s    Average        min        max
00:26:28.445 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     171.85       0.08  892290.04     999.50 1011055.32
00:26:28.445 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     161.92       0.08  934963.25     386.52 2001772.76
00:26:28.445 ========================================================
00:26:28.445 Total                                                                :     333.77       0.16  912991.63     386.52 2001772.76
00:26:28.445
00:26:28.445 [2024-11-20 05:37:48.276059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b26ee0 (9): Bad file descriptor
00:26:28.445 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:28.445 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:28.445 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:26:28.445 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65238
00:26:28.445 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65238
00:26:29.014 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65238) - No such process
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 65238
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 65238
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
common/autotest_common.sh@653 -- # wait 65238 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:29.014 [2024-11-20 05:37:48.801665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65284 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:29.014 05:37:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:29.273 [2024-11-20 05:37:48.993843] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
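Note: this second pass is the control case: the subsystem is re-created, Delay0 re-attached, and a shorter 3-second perf run (pid 65284 here) is left to finish on its own while the script polls it. The @57/@58/@60 iterations that follow are that poll loop; reconstructed from the trace it is roughly (pid and timeout copied from this run, failure handling illustrative):

  perf_pid=65284                              # reported by the trace; normally $! of the backgrounded perf
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # keep looping while perf is still alive
      sleep 0.5
      if (( delay++ > 20 )); then             # give up after ~10 s of polling
          echo "perf (pid $perf_pid) still running after 10 s" >&2
          break
      fi
  done

Because nothing deletes the subsystem this time, perf completes cleanly, and the latency table below reports ~1.0 s averages per I/O, which is simply the 1,000,000 us configured on Delay0.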
00:26:29.532 05:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:29.532 05:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284
00:26:29.532 05:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:26:30.099 05:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:30.099 05:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284
00:26:30.099 05:37:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:26:30.667 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:30.667 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284
00:26:30.667 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:26:30.925 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:30.925 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284
00:26:30.925 05:37:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:26:31.492 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:31.493 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284
00:26:31.493 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:26:32.059 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:32.059 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284
00:26:32.059 05:37:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:26:32.317 Initializing NVMe Controllers
00:26:32.317 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:26:32.317 Controller IO queue size 128, less than required.
00:26:32.317 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:32.317 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:26:32.317 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:26:32.317 Initialization complete. Launching workers.
00:26:32.317 ========================================================
00:26:32.317                                                                                                  Latency(us)
00:26:32.317 Device Information                                                   : IOPS       MiB/s    Average        min        max
00:26:32.317 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002820.32 1000105.78 1009678.63
00:26:32.317 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004268.61 1000137.13 1041262.55
00:26:32.317 ========================================================
00:26:32.317 Total                                                                :     256.00       0.12 1003544.46 1000105.78 1041262.55
00:26:32.317
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65284
00:26:32.577 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65284) - No such process
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65284
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:32.577 rmmod nvme_tcp
00:26:32.577 rmmod nvme_fabrics
00:26:32.577 rmmod nvme_keyring
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 65182 ']'
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 65182
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 65182 ']'
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 65182
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65182
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:26:32.577 killing
process with pid 65182 00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65182' 00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 65182 00:26:32.577 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 65182 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:32.846 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:26:33.105 00:26:33.105 real 0m9.732s 00:26:33.105 user 0m28.054s 00:26:33.105 sys 0m2.665s 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:33.105 ************************************ 00:26:33.105 END TEST nvmf_delete_subsystem 00:26:33.105 ************************************ 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:26:33.105 ************************************ 00:26:33.105 START TEST nvmf_host_management 00:26:33.105 ************************************ 00:26:33.105 05:37:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:26:33.365 * Looking for test storage... 00:26:33.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:26:33.365 
05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc 
genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.365 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:33.366 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:33.366 05:37:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:33.366 Cannot find device "nvmf_init_br" 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:33.366 Cannot find device "nvmf_init_br2" 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:33.366 Cannot find device "nvmf_tgt_br" 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:26:33.366 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:33.366 Cannot find device "nvmf_tgt_br2" 00:26:33.367 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:26:33.367 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:33.367 Cannot find device "nvmf_init_br" 00:26:33.367 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:26:33.367 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:33.626 Cannot find device "nvmf_init_br2" 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:33.626 Cannot find device "nvmf_tgt_br" 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:33.626 Cannot find device "nvmf_tgt_br2" 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:33.626 Cannot find device "nvmf_br" 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:26:33.626 05:37:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:33.626 Cannot find device "nvmf_init_if" 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:33.626 Cannot find device "nvmf_init_if2" 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:33.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:33.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:33.626 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:33.886 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:33.886 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:26:33.886 00:26:33.886 --- 10.0.0.3 ping statistics --- 00:26:33.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.886 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:33.886 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:33.886 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:26:33.886 00:26:33.886 --- 10.0.0.4 ping statistics --- 00:26:33.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.886 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:33.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:33.886 00:26:33.886 --- 10.0.0.1 ping statistics --- 00:26:33.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.886 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:33.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:26:33.886 00:26:33.886 --- 10.0.0.2 ping statistics --- 00:26:33.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.886 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65575 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65575 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # 
'[' -z 65575 ']' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:33.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:33.886 05:37:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:33.886 [2024-11-20 05:37:53.732079] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:33.886 [2024-11-20 05:37:53.732177] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.146 [2024-11-20 05:37:53.880760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.146 [2024-11-20 05:37:53.935662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.146 [2024-11-20 05:37:53.935710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.146 [2024-11-20 05:37:53.935720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.146 [2024-11-20 05:37:53.935729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.146 [2024-11-20 05:37:53.935737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
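
At this point nvmf_veth_init has built the test network and nvmfappstart has launched the target inside it, so it is worth condensing what those helpers actually executed. A minimal sketch in the same shell idiom, collapsed to one veth pair per side (the run above creates two each), with rpc.py's -t wait standing in for the framework's waitforlisten helper; that substitution is an assumption, everything else mirrors commands visible in the xtrace above:

    # Target namespace plus veth pairs enslaved to one bridge:
    # initiator side keeps 10.0.0.1, target side gets 10.0.0.3.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # Let NVMe/TCP traffic in (the run tags the rule with an SPDK_NVMF
    # comment so teardown can strip it later):
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target address must answer first
    # Start the target on cores 1-4 (-m 0x1E) inside the namespace, then
    # block until its RPC socket /var/tmp/spdk.sock responds:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 rpc_get_methods > /dev/null

The reactor notices that follow confirm the core mask: 0x1E is binary 11110, i.e. cores 1 through 4.
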
00:26:34.146 [2024-11-20 05:37:53.936747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.146 [2024-11-20 05:37:53.936882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.146 [2024-11-20 05:37:53.936956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.146 [2024-11-20 05:37:53.936955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:34.146 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:34.146 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:26:34.146 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:34.146 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:34.146 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.405 [2024-11-20 05:37:54.101273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.405 Malloc0 00:26:34.405 [2024-11-20 05:37:54.180397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65634 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65634 /var/tmp/bdevperf.sock 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 65634 ']' 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:34.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:34.405 { 00:26:34.405 "params": { 00:26:34.405 "name": "Nvme$subsystem", 00:26:34.405 "trtype": "$TEST_TRANSPORT", 00:26:34.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.405 "adrfam": "ipv4", 00:26:34.405 "trsvcid": "$NVMF_PORT", 00:26:34.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.405 "hdgst": ${hdgst:-false}, 00:26:34.405 "ddgst": ${ddgst:-false} 00:26:34.405 }, 00:26:34.405 "method": "bdev_nvme_attach_controller" 00:26:34.405 } 00:26:34.405 EOF 00:26:34.405 )") 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:34.405 05:37:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:34.405 "params": { 00:26:34.405 "name": "Nvme0", 00:26:34.405 "trtype": "tcp", 00:26:34.405 "traddr": "10.0.0.3", 00:26:34.405 "adrfam": "ipv4", 00:26:34.405 "trsvcid": "4420", 00:26:34.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:34.405 "hdgst": false, 00:26:34.405 "ddgst": false 00:26:34.405 }, 00:26:34.405 "method": "bdev_nvme_attach_controller" 00:26:34.405 }' 00:26:34.405 [2024-11-20 05:37:54.296723] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
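
The heredoc/jq/printf sequence above is gen_nvmf_target_json assembling the bdev-subsystem configuration that bdevperf reads from file descriptor 63. Written out to an ordinary file, it is roughly equivalent to the following; the inner object is verbatim from the printf in the log, while the outer "subsystems"/"config" wrapper is an assumption about how nvmf/common.sh frames it:

    cat > /tmp/bdevperf.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    JSON
    # Same invocation as the run above, minus the /dev/fd/63 substitution:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10

On attach, bdevperf exposes the namespace behind nqn.2016-06.io.spdk:cnode0 as bdev Nvme0n1 and drives the 64-deep, 64 KiB verify workload against it for 10 seconds.
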
00:26:34.405 [2024-11-20 05:37:54.296824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65634 ] 00:26:34.664 [2024-11-20 05:37:54.453117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.664 [2024-11-20 05:37:54.519039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.923 Running I/O for 10 seconds... 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1155 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1155 -ge 100 ']' 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
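
The loop that just hit its break is host_management.sh's waitforio: it polls bdevperf's RPC socket until the attached bdev has completed at least 100 reads (this run saw 1155), proving the data path is live before the test starts pulling the subsystem out from under it. A rough shell equivalent reconstructed from the xtrace; calling rpc.py directly and the one-second sleep are assumptions, while the socket path, bdev name, threshold, and jq filter are taken from the log:

    waitforio() {
        local sock=$1 bdev=$2 i read_io_count
        for (( i = 10; i != 0; i-- )); do
            read_io_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            # Enough I/O observed -> safe to proceed to the disruption step.
            (( read_io_count >= 100 )) && return 0
            sleep 1
        done
        return 1    # bdevperf never generated traffic; fail the test
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1

With I/O confirmed, the next RPC removes host0 from cnode0, which is what produces the wall of "ABORTED - SQ DELETION" completions below: revoking the host's access tears down its queue pairs, and every in-flight command on those queues completes with that status.
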
00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.494 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:35.494 [2024-11-20 05:37:55.399670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.494 [2024-11-20 05:37:55.399711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.399726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.494 [2024-11-20 05:37:55.399737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.399749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.494 [2024-11-20 05:37:55.399760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.399771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.494 [2024-11-20 05:37:55.399781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.399791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e660 is same with the state(6) to be set 00:26:35.494 [2024-11-20 05:37:55.400765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.400984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.400997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.494 [2024-11-20 05:37:55.401014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.494 [2024-11-20 05:37:55.401027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.495 [2024-11-20 05:37:55.401037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.495 [2024-11-20 05:37:55.401050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.495 [2024-11-20 05:37:55.401060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.495 [2024-11-20 05:37:55.401072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.495 [2024-11-20 05:37:55.401083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.495 [2024-11-20 05:37:55.401100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.495 [2024-11-20 05:37:55.401111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.495 [2024-11-20 05:37:55.401123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.495 [2024-11-20 05:37:55.401134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:35.495 [... 49 further identical READ / "ABORTED - SQ DELETION (00/08)" pairs (cid:14 through cid:62, lba:26368 through lba:32512, len:128, timestamps 05:37:55.401146 through 05:37:55.402259) elided; every read still in flight on qid:1 was aborted by the submission queue deletion that accompanies the controller reset. The shell-trace records interleaved with that dump are collected just below. ...]
00:26:35.495 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.496 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:35.496 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.496 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:35.496 [2024-11-20 05:37:55.402270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e370 is same with the state(6) to be set 00:26:35.496 [2024-11-20 05:37:55.403477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:35.496 task offset: 32640 on job bdev=Nvme0n1 fails 00:26:35.496
00:26:35.496                                                                                                  Latency(us)
00:26:35.496 [2024-11-20T05:37:55.415Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:35.496 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:35.496 Job: Nvme0n1 ended in about 0.70 seconds with error
00:26:35.496 Verification LBA range: start 0x0 length 0x400
00:26:35.496 Nvme0n1             :       0.70    1745.08     109.07      91.85     0.00   33923.94    5710.99   36700.16
00:26:35.496 [2024-11-20T05:37:55.415Z] ===================================================================================================================
00:26:35.496 [2024-11-20T05:37:55.415Z] Total               :            1745.08     109.07      91.85     0.00   33923.94    5710.99   36700.16
00:26:35.496
00:26:35.496 [2024-11-20 05:37:55.405645] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:35.496 [2024-11-20 05:37:55.405670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166e660 (9): Bad file descriptor 00:26:35.496 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.496 05:37:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:26:35.755 [2024-11-20 05:37:55.412662] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
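The block above is the point of the failure case: bdevperf holds 64 reads in flight when the host's access to nqn.2016-06.io.spdk:cnode0 is cut off (presumably by an nvmf_subsystem_remove_host step outside this excerpt), so every queued read completes as ABORTED - SQ DELETION, and the nvmf_subsystem_add_host call traced above readmits the host so the bdev_nvme reset path can reconnect. A minimal sketch of that revoke-and-restore pattern, assuming a target already serving the subsystem and the rpc.py from this checkout (the NQNs simply mirror this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0

# Revoke access: in-flight host I/O fails with ABORTED - SQ DELETION.
"$rpc" nvmf_subsystem_remove_host "$subsys" "$host"
sleep 1
# Restore access: the host-side controller reset can now reconnect.
"$rpc" nvmf_subsystem_add_host "$subsys" "$host"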
00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65634 00:26:36.701 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65634) - No such process 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:36.701 { 00:26:36.701 "params": { 00:26:36.701 "name": "Nvme$subsystem", 00:26:36.701 "trtype": "$TEST_TRANSPORT", 00:26:36.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.701 "adrfam": "ipv4", 00:26:36.701 "trsvcid": "$NVMF_PORT", 00:26:36.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.701 "hdgst": ${hdgst:-false}, 00:26:36.701 "ddgst": ${ddgst:-false} 00:26:36.701 }, 00:26:36.701 "method": "bdev_nvme_attach_controller" 00:26:36.701 } 00:26:36.701 EOF 00:26:36.701 )") 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:36.701 05:37:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:36.701 "params": { 00:26:36.701 "name": "Nvme0", 00:26:36.701 "trtype": "tcp", 00:26:36.701 "traddr": "10.0.0.3", 00:26:36.701 "adrfam": "ipv4", 00:26:36.701 "trsvcid": "4420", 00:26:36.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:36.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:36.701 "hdgst": false, 00:26:36.701 "ddgst": false 00:26:36.701 }, 00:26:36.701 "method": "bdev_nvme_attach_controller" 00:26:36.701 }' 00:26:36.701 [2024-11-20 05:37:56.480046] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:36.701 [2024-11-20 05:37:56.480148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65684 ] 00:26:36.959 [2024-11-20 05:37:56.630662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.959 [2024-11-20 05:37:56.686438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.959 Running I/O for 1 seconds... 
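For readability: the gen_nvmf_target_json heredoc above assembles the single bdev_nvme_attach_controller entry that bdevperf then reads via --json /dev/fd/62. An equivalent standalone invocation, sketched under the assumption that the helper wraps the printed params in the standard SPDK "subsystems" JSON config (the /tmp path is illustrative only):

# Sketch: the same attach-controller config as a regular file instead of /dev/fd/62.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1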
00:26:38.335 1664.00 IOPS, 104.00 MiB/s
00:26:38.335                                                                                                  Latency(us)
00:26:38.335 [2024-11-20T05:37:58.254Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:38.335 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:38.335 Verification LBA range: start 0x0 length 0x400
00:26:38.335 Nvme0n1             :       1.01    1706.66     106.67       0.00     0.00   36911.66    4681.14   31831.77
00:26:38.335 [2024-11-20T05:37:58.254Z] ===================================================================================================================
00:26:38.335 [2024-11-20T05:37:58.254Z] Total               :            1706.66     106.67       0.00     0.00   36911.66    4681.14   31831.77
00:26:38.335 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65575 ']' 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65575 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 65575 ']' 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 65575 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65575 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '['
reactor_1 = sudo ']' 00:26:38.335 killing process with pid 65575 00:26:38.335 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65575' 00:26:38.335 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 65575 00:26:38.335 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 65575 00:26:38.594 [2024-11-20 05:37:58.348990] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:38.594 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:38.853 00:26:38.853 real 0m5.665s 00:26:38.853 user 0m20.332s 00:26:38.853 sys 0m1.677s 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:38.853 ************************************ 00:26:38.853 END TEST nvmf_host_management 00:26:38.853 ************************************ 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:26:38.853 ************************************ 00:26:38.853 START TEST nvmf_lvol 00:26:38.853 ************************************ 00:26:38.853 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:26:39.113 * Looking for test storage... 
00:26:39.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.113 --rc genhtml_branch_coverage=1 00:26:39.113 --rc genhtml_function_coverage=1 00:26:39.113 --rc genhtml_legend=1 00:26:39.113 --rc geninfo_all_blocks=1 00:26:39.113 --rc geninfo_unexecuted_blocks=1 00:26:39.113 00:26:39.113 ' 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.113 --rc genhtml_branch_coverage=1 00:26:39.113 --rc genhtml_function_coverage=1 00:26:39.113 --rc genhtml_legend=1 00:26:39.113 --rc geninfo_all_blocks=1 00:26:39.113 --rc geninfo_unexecuted_blocks=1 00:26:39.113 00:26:39.113 ' 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.113 --rc genhtml_branch_coverage=1 00:26:39.113 --rc genhtml_function_coverage=1 00:26:39.113 --rc genhtml_legend=1 00:26:39.113 --rc geninfo_all_blocks=1 00:26:39.113 --rc geninfo_unexecuted_blocks=1 00:26:39.113 00:26:39.113 ' 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:39.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.113 --rc genhtml_branch_coverage=1 00:26:39.113 --rc genhtml_function_coverage=1 00:26:39.113 --rc genhtml_legend=1 00:26:39.113 --rc geninfo_all_blocks=1 00:26:39.113 --rc geninfo_unexecuted_blocks=1 00:26:39.113 00:26:39.113 ' 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.113 05:37:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.113 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.114 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:26:39.114 
05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
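To keep the wall of ip commands below readable: nvmf_veth_init first deletes any stale topology (the "Cannot find device" notices that follow are that cleanup running on a clean machine) and then builds a bridged veth network with the target in its own namespace. A condensed sketch of the end state, using the names and 10.0.0.x addresses from this run and omitting the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2 and 10.0.0.4) plus the iptables ACCEPT rules:

# Sketch of the topology nvmf_veth_init builds (per the trace below).
ip netns add nvmf_tgt_ns_spdk                                # target gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge ties the *_br peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# ...followed by 'ip link set ... up' on each interface, as traced below.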
00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:39.114 Cannot find device "nvmf_init_br" 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:39.114 Cannot find device "nvmf_init_br2" 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:39.114 Cannot find device "nvmf_tgt_br" 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:39.114 Cannot find device "nvmf_tgt_br2" 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:39.114 Cannot find device "nvmf_init_br" 00:26:39.114 05:37:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:26:39.114 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:39.114 Cannot find device "nvmf_init_br2" 00:26:39.114 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:26:39.114 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:39.114 Cannot find device "nvmf_tgt_br" 00:26:39.114 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:26:39.114 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:39.377 Cannot find device "nvmf_tgt_br2" 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:39.377 Cannot find device "nvmf_br" 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:39.377 Cannot find device "nvmf_init_if" 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:39.377 Cannot find device "nvmf_init_if2" 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:39.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:39.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:39.377 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:39.636 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:39.636 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:39.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:39.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:26:39.637 00:26:39.637 --- 10.0.0.3 ping statistics --- 00:26:39.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.637 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:39.637 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:39.637 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:26:39.637 00:26:39.637 --- 10.0.0.4 ping statistics --- 00:26:39.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.637 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:39.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:26:39.637 00:26:39.637 --- 10.0.0.1 ping statistics --- 00:26:39.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.637 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:39.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:39.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:26:39.637 00:26:39.637 --- 10.0.0.2 ping statistics --- 00:26:39.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.637 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:39.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65949 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65949 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 65949 ']' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:39.637 05:37:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:39.637 [2024-11-20 05:37:59.452177] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:26:39.637 [2024-11-20 05:37:59.452273] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.896 [2024-11-20 05:37:59.610963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:39.896 [2024-11-20 05:37:59.675281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.896 [2024-11-20 05:37:59.675350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.896 [2024-11-20 05:37:59.675366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.896 [2024-11-20 05:37:59.675379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.896 [2024-11-20 05:37:59.675390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.896 [2024-11-20 05:37:59.676477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.896 [2024-11-20 05:37:59.676977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.896 [2024-11-20 05:37:59.676980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.463 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:40.463 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:26:40.463 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:40.463 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:40.463 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:40.721 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.721 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:40.721 [2024-11-20 05:38:00.588546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.721 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:40.980 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:26:40.980 05:38:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:41.546 05:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:26:41.546 05:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:26:41.546 05:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:26:42.112 05:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=94d4e6c0-3ab4-45a8-bf6a-8ef03179bfde 00:26:42.112 05:38:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
94d4e6c0-3ab4-45a8-bf6a-8ef03179bfde lvol 20 00:26:42.112 05:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e227f838-d732-4727-8004-fd49a9a9c04d 00:26:42.112 05:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:42.370 05:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e227f838-d732-4727-8004-fd49a9a9c04d 00:26:42.937 05:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:42.937 [2024-11-20 05:38:02.853178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:43.196 05:38:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:43.196 05:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66101 00:26:43.196 05:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:26:43.196 05:38:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:26:44.589 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot e227f838-d732-4727-8004-fd49a9a9c04d MY_SNAPSHOT 00:26:44.589 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5c6bb708-e033-40f5-9935-bb687b11a5b7 00:26:44.589 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize e227f838-d732-4727-8004-fd49a9a9c04d 30 00:26:44.847 05:38:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5c6bb708-e033-40f5-9935-bb687b11a5b7 MY_CLONE 00:26:45.415 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fac9245c-3514-48c4-9831-6fb9aac4ee60 00:26:45.415 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate fac9245c-3514-48c4-9831-6fb9aac4ee60 00:26:45.983 05:38:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66101 00:26:54.121 Initializing NVMe Controllers 00:26:54.121 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:26:54.121 Controller IO queue size 128, less than required. 00:26:54.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:54.121 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:26:54.121 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:26:54.121 Initialization complete. Launching workers. 
00:26:54.121 ========================================================
00:26:54.121 Latency(us)
00:26:54.121 Device Information : IOPS MiB/s Average min max
00:26:54.121 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10984.80 42.91 11656.17 2500.53 56012.48
00:26:54.121 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10309.50 40.27 12426.82 1008.73 76618.09
00:26:54.121 ========================================================
00:26:54.121 Total : 21294.30 83.18 12029.27 1008.73 76618.09
00:26:54.121
00:26:54.121 05:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:54.121 05:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e227f838-d732-4727-8004-fd49a9a9c04d 00:26:54.121 05:38:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 94d4e6c0-3ab4-45a8-bf6a-8ef03179bfde 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.380 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.380 rmmod nvme_tcp 00:26:54.639 rmmod nvme_fabrics 00:26:54.639 rmmod nvme_keyring 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65949 ']' 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65949 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 65949 ']' 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 65949 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65949 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:54.639 killing process with pid 65949 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@970 -- # echo 'killing process with pid 65949' 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 65949 00:26:54.639 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 65949 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.898 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:54.899 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:26:55.158 00:26:55.158 real 0m16.181s 00:26:55.158 user 1m4.799s 00:26:55.158 sys 0m5.332s 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:55.158 ************************************ 00:26:55.158 END TEST nvmf_lvol 00:26:55.158 ************************************ 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:26:55.158 ************************************ 00:26:55.158 START TEST nvmf_lvs_grow 00:26:55.158 ************************************ 00:26:55.158 05:38:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:26:55.158 * Looking for test storage... 00:26:55.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:55.158 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:55.158 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:55.158 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:26:55.417 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:55.417 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.417 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.417 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.418 --rc genhtml_branch_coverage=1 00:26:55.418 --rc genhtml_function_coverage=1 00:26:55.418 --rc genhtml_legend=1 00:26:55.418 --rc geninfo_all_blocks=1 00:26:55.418 --rc geninfo_unexecuted_blocks=1 00:26:55.418 00:26:55.418 ' 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.418 --rc genhtml_branch_coverage=1 00:26:55.418 --rc genhtml_function_coverage=1 00:26:55.418 --rc genhtml_legend=1 00:26:55.418 --rc geninfo_all_blocks=1 00:26:55.418 --rc geninfo_unexecuted_blocks=1 00:26:55.418 00:26:55.418 ' 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.418 --rc genhtml_branch_coverage=1 00:26:55.418 --rc genhtml_function_coverage=1 00:26:55.418 --rc genhtml_legend=1 00:26:55.418 --rc geninfo_all_blocks=1 00:26:55.418 --rc geninfo_unexecuted_blocks=1 00:26:55.418 00:26:55.418 ' 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:55.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.418 --rc genhtml_branch_coverage=1 00:26:55.418 --rc genhtml_function_coverage=1 00:26:55.418 --rc genhtml_legend=1 00:26:55.418 --rc geninfo_all_blocks=1 00:26:55.418 --rc geninfo_unexecuted_blocks=1 00:26:55.418 00:26:55.418 ' 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:26:55.418 05:38:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.418 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:55.419 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
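The lcov gate traced a few records earlier (lt 1.15 2 via cmp_versions in scripts/common.sh) picks coverage flags by splitting each version string on ".", "-" and ":" and comparing the fields numerically, left to right. A minimal re-implementation of that comparison, offered only as an illustrative sketch: ver_lt is a made-up name rather than the upstream helper, and it assumes purely numeric fields.

# Sketch of the component-wise version compare traced above (lt 1.15 2).
# Simplified from the scripts/common.sh logic; not the verbatim upstream code.
ver_lt() {
  local IFS=.-:
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < len; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
    (( a < b )) && return 0             # strictly lower -> "less than"
    (( a > b )) && return 1
  done
  return 1                              # equal -> not less than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"   # same outcome as the traced lt 1.15 2

The "1.15 < 2" result is why the trace goes on to export the legacy --rc lcov_branch_coverage/lcov_function_coverage options seen above.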
00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:55.419 Cannot find device "nvmf_init_br" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:55.419 Cannot find device "nvmf_init_br2" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:55.419 Cannot find device "nvmf_tgt_br" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:55.419 Cannot find device "nvmf_tgt_br2" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:55.419 Cannot find device "nvmf_init_br" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:55.419 Cannot find device "nvmf_init_br2" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:55.419 Cannot find device "nvmf_tgt_br" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:55.419 Cannot find device "nvmf_tgt_br2" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:55.419 Cannot find device "nvmf_br" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:55.419 Cannot find device "nvmf_init_if" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:55.419 Cannot find device "nvmf_init_if2" 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:55.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:55.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:55.419 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
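The nvmf_veth_init sequence traced above builds the whole TCP test topology in software: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), host-side initiator ends (10.0.0.1/10.0.0.2), and a bridge nvmf_br joining the peer ends. A condensed replay of the same commands, with one initiator pair and one target pair shown (the test creates two of each, exactly as logged):

# Condensed replay of the nvmf_veth_init steps traced above.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # bridge joins the two sides
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

With this in place the target listens inside nvmf_tgt_ns_spdk on 10.0.0.3:4420 while initiators connect from the host side of the bridge, which is exactly what the iptables ACCEPT rules and the four ping checks that follow are verifying.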
00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:55.679 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:55.679 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:55.679 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:26:55.680 00:26:55.680 --- 10.0.0.3 ping statistics --- 00:26:55.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.680 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:55.680 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:55.680 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:26:55.680 00:26:55.680 --- 10.0.0.4 ping statistics --- 00:26:55.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.680 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:55.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:55.680 00:26:55.680 --- 10.0.0.1 ping statistics --- 00:26:55.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.680 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:55.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:55.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:26:55.680 00:26:55.680 --- 10.0.0.2 ping statistics --- 00:26:55.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.680 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:55.680 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66528 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66528 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 66528 ']' 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:55.939 05:38:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:55.939 [2024-11-20 05:38:15.688392] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:26:55.939 [2024-11-20 05:38:15.688520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.939 [2024-11-20 05:38:15.845978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.198 [2024-11-20 05:38:15.906235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.198 [2024-11-20 05:38:15.906279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.198 [2024-11-20 05:38:15.906289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.198 [2024-11-20 05:38:15.906298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.198 [2024-11-20 05:38:15.906305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:56.198 [2024-11-20 05:38:15.906596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.198 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:56.198 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:26:56.198 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.198 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.198 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:56.198 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.198 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:56.457 [2024-11-20 05:38:16.335149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:56.457 ************************************ 00:26:56.457 START TEST lvs_grow_clean 00:26:56.457 ************************************ 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:26:56.457 05:38:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:26:56.457 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:56.716 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:56.716 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:56.974 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:26:56.974 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:26:57.233 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:26:57.233 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:26:57.233 05:38:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:26:57.492 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:26:57.492 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:26:57.492 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 lvol 150 00:26:57.752 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b 00:26:57.752 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:26:57.752 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:26:58.010 [2024-11-20 05:38:17.751244] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:26:58.010 [2024-11-20 05:38:17.751317] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:26:58.010 true 00:26:58.010 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:26:58.010 05:38:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:26:58.270 05:38:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:26:58.270 05:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:58.527 05:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b 00:26:58.786 05:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:59.046 [2024-11-20 05:38:18.775729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:59.046 05:38:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66682 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66682 /var/tmp/bdevperf.sock 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 66682 ']' 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:59.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:59.304 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.304 [2024-11-20 05:38:19.093771] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:26:59.304 [2024-11-20 05:38:19.093843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66682 ] 00:26:59.562 [2024-11-20 05:38:19.233101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.562 [2024-11-20 05:38:19.282771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.562 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:59.562 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:26:59.562 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:26:59.819 Nvme0n1 00:26:59.819 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:00.077 [ 00:27:00.077 { 00:27:00.077 "aliases": [ 00:27:00.077 "9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b" 00:27:00.077 ], 00:27:00.077 "assigned_rate_limits": { 00:27:00.077 "r_mbytes_per_sec": 0, 00:27:00.077 "rw_ios_per_sec": 0, 00:27:00.077 "rw_mbytes_per_sec": 0, 00:27:00.077 "w_mbytes_per_sec": 0 00:27:00.077 }, 00:27:00.077 "block_size": 4096, 00:27:00.077 "claimed": false, 00:27:00.077 "driver_specific": { 00:27:00.077 "mp_policy": "active_passive", 00:27:00.077 "nvme": [ 00:27:00.077 { 00:27:00.077 "ctrlr_data": { 00:27:00.077 "ana_reporting": false, 00:27:00.077 "cntlid": 1, 00:27:00.077 "firmware_revision": "25.01", 00:27:00.077 "model_number": "SPDK bdev Controller", 00:27:00.077 "multi_ctrlr": true, 00:27:00.077 "oacs": { 00:27:00.077 "firmware": 0, 00:27:00.077 "format": 0, 00:27:00.077 "ns_manage": 0, 00:27:00.077 "security": 0 00:27:00.077 }, 00:27:00.077 "serial_number": "SPDK0", 00:27:00.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:00.077 "vendor_id": "0x8086" 00:27:00.077 }, 00:27:00.077 "ns_data": { 00:27:00.077 "can_share": true, 00:27:00.077 "id": 1 00:27:00.077 }, 00:27:00.077 "trid": { 00:27:00.077 "adrfam": "IPv4", 00:27:00.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:00.077 "traddr": "10.0.0.3", 00:27:00.077 "trsvcid": "4420", 00:27:00.077 "trtype": "TCP" 00:27:00.077 }, 00:27:00.077 "vs": { 00:27:00.077 "nvme_version": "1.3" 00:27:00.077 } 00:27:00.077 } 00:27:00.077 ] 00:27:00.077 }, 00:27:00.077 "memory_domains": [ 00:27:00.077 { 00:27:00.077 "dma_device_id": "system", 00:27:00.077 "dma_device_type": 1 00:27:00.077 } 00:27:00.077 ], 00:27:00.077 "name": "Nvme0n1", 00:27:00.077 "num_blocks": 38912, 00:27:00.077 "numa_id": -1, 00:27:00.077 "product_name": "NVMe disk", 00:27:00.077 "supported_io_types": { 00:27:00.077 "abort": true, 00:27:00.077 "compare": true, 00:27:00.077 "compare_and_write": true, 00:27:00.077 "copy": true, 00:27:00.077 "flush": true, 00:27:00.077 "get_zone_info": false, 00:27:00.077 "nvme_admin": true, 00:27:00.077 "nvme_io": true, 00:27:00.077 "nvme_io_md": false, 00:27:00.077 "nvme_iov_md": false, 00:27:00.077 "read": true, 00:27:00.077 "reset": true, 00:27:00.077 "seek_data": false, 00:27:00.077 "seek_hole": false, 00:27:00.077 "unmap": true, 00:27:00.077 
"write": true, 00:27:00.077 "write_zeroes": true, 00:27:00.077 "zcopy": false, 00:27:00.077 "zone_append": false, 00:27:00.077 "zone_management": false 00:27:00.077 }, 00:27:00.077 "uuid": "9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b", 00:27:00.077 "zoned": false 00:27:00.077 } 00:27:00.077 ] 00:27:00.077 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66711 00:27:00.077 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:00.077 05:38:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:00.336 Running I/O for 10 seconds... 00:27:01.271 Latency(us) 00:27:01.271 [2024-11-20T05:38:21.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:01.271 Nvme0n1 : 1.00 9450.00 36.91 0.00 0.00 0.00 0.00 0.00 00:27:01.271 [2024-11-20T05:38:21.190Z] =================================================================================================================== 00:27:01.271 [2024-11-20T05:38:21.190Z] Total : 9450.00 36.91 0.00 0.00 0.00 0.00 0.00 00:27:01.271 00:27:02.205 05:38:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:02.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:02.205 Nvme0n1 : 2.00 9642.50 37.67 0.00 0.00 0.00 0.00 0.00 00:27:02.205 [2024-11-20T05:38:22.124Z] =================================================================================================================== 00:27:02.205 [2024-11-20T05:38:22.124Z] Total : 9642.50 37.67 0.00 0.00 0.00 0.00 0.00 00:27:02.206 00:27:02.463 true 00:27:02.463 05:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:02.463 05:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:02.722 05:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:02.723 05:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:02.723 05:38:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66711 00:27:03.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:03.289 Nvme0n1 : 3.00 9564.67 37.36 0.00 0.00 0.00 0.00 0.00 00:27:03.289 [2024-11-20T05:38:23.208Z] =================================================================================================================== 00:27:03.289 [2024-11-20T05:38:23.208Z] Total : 9564.67 37.36 0.00 0.00 0.00 0.00 0.00 00:27:03.289 00:27:04.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:04.244 Nvme0n1 : 4.00 9233.50 36.07 0.00 0.00 0.00 0.00 0.00 00:27:04.244 [2024-11-20T05:38:24.163Z] =================================================================================================================== 00:27:04.244 [2024-11-20T05:38:24.163Z] Total : 9233.50 36.07 0.00 0.00 0.00 
0.00 0.00 00:27:04.244 00:27:05.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:05.181 Nvme0n1 : 5.00 8982.00 35.09 0.00 0.00 0.00 0.00 0.00 00:27:05.181 [2024-11-20T05:38:25.100Z] =================================================================================================================== 00:27:05.181 [2024-11-20T05:38:25.100Z] Total : 8982.00 35.09 0.00 0.00 0.00 0.00 0.00 00:27:05.181 00:27:06.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:06.118 Nvme0n1 : 6.00 9014.33 35.21 0.00 0.00 0.00 0.00 0.00 00:27:06.118 [2024-11-20T05:38:26.037Z] =================================================================================================================== 00:27:06.118 [2024-11-20T05:38:26.037Z] Total : 9014.33 35.21 0.00 0.00 0.00 0.00 0.00 00:27:06.118 00:27:07.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:07.496 Nvme0n1 : 7.00 9032.29 35.28 0.00 0.00 0.00 0.00 0.00 00:27:07.496 [2024-11-20T05:38:27.415Z] =================================================================================================================== 00:27:07.496 [2024-11-20T05:38:27.415Z] Total : 9032.29 35.28 0.00 0.00 0.00 0.00 0.00 00:27:07.496 00:27:08.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:08.431 Nvme0n1 : 8.00 9037.62 35.30 0.00 0.00 0.00 0.00 0.00 00:27:08.431 [2024-11-20T05:38:28.350Z] =================================================================================================================== 00:27:08.431 [2024-11-20T05:38:28.350Z] Total : 9037.62 35.30 0.00 0.00 0.00 0.00 0.00 00:27:08.431 00:27:09.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:09.364 Nvme0n1 : 9.00 9043.44 35.33 0.00 0.00 0.00 0.00 0.00 00:27:09.364 [2024-11-20T05:38:29.283Z] =================================================================================================================== 00:27:09.364 [2024-11-20T05:38:29.283Z] Total : 9043.44 35.33 0.00 0.00 0.00 0.00 0.00 00:27:09.364 00:27:10.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:10.298 Nvme0n1 : 10.00 9042.50 35.32 0.00 0.00 0.00 0.00 0.00 00:27:10.298 [2024-11-20T05:38:30.217Z] =================================================================================================================== 00:27:10.298 [2024-11-20T05:38:30.217Z] Total : 9042.50 35.32 0.00 0.00 0.00 0.00 0.00 00:27:10.298 00:27:10.298 00:27:10.298 Latency(us) 00:27:10.298 [2024-11-20T05:38:30.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:10.298 Nvme0n1 : 10.01 9043.60 35.33 0.00 0.00 14149.86 4525.10 96369.13 00:27:10.298 [2024-11-20T05:38:30.217Z] =================================================================================================================== 00:27:10.298 [2024-11-20T05:38:30.217Z] Total : 9043.60 35.33 0.00 0.00 14149.86 4525.10 96369.13 00:27:10.298 { 00:27:10.298 "results": [ 00:27:10.298 { 00:27:10.298 "job": "Nvme0n1", 00:27:10.298 "core_mask": "0x2", 00:27:10.298 "workload": "randwrite", 00:27:10.298 "status": "finished", 00:27:10.298 "queue_depth": 128, 00:27:10.298 "io_size": 4096, 00:27:10.298 "runtime": 10.012934, 00:27:10.298 "iops": 9043.603003874789, 00:27:10.298 "mibps": 35.32657423388589, 00:27:10.298 "io_failed": 0, 00:27:10.298 "io_timeout": 0, 00:27:10.298 "avg_latency_us": 
14149.862670900968, 00:27:10.298 "min_latency_us": 4525.104761904762, 00:27:10.298 "max_latency_us": 96369.12761904762 00:27:10.298 } 00:27:10.298 ], 00:27:10.298 "core_count": 1 00:27:10.298 } 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66682 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 66682 ']' 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 66682 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66682 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66682' 00:27:10.298 killing process with pid 66682 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 66682 00:27:10.298 Received shutdown signal, test time was about 10.000000 seconds 00:27:10.298 00:27:10.298 Latency(us) 00:27:10.298 [2024-11-20T05:38:30.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.298 [2024-11-20T05:38:30.217Z] =================================================================================================================== 00:27:10.298 [2024-11-20T05:38:30.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.298 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 66682 00:27:10.557 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:10.816 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:11.074 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:11.075 05:38:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:11.334 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:11.334 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:11.334 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:11.594 [2024-11-20 05:38:31.291924] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:11.594 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:11.852 2024/11/20 05:38:31 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:796d5754-1e06-401f-ba5a-c1ef0adcbe88], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:27:11.852 request: 00:27:11.852 { 00:27:11.852 "method": "bdev_lvol_get_lvstores", 00:27:11.852 "params": { 00:27:11.852 "uuid": "796d5754-1e06-401f-ba5a-c1ef0adcbe88" 00:27:11.852 } 00:27:11.852 } 00:27:11.852 Got JSON-RPC error response 00:27:11.852 GoRPCClient: error on JSON-RPC call 00:27:11.852 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:27:11.852 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:11.852 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:11.853 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:11.853 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:11.853 aio_bdev 00:27:12.115 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b 00:27:12.115 05:38:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b 00:27:12.115 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:12.115 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:27:12.115 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:12.115 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:12.115 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:12.115 05:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b -t 2000 00:27:12.374 [ 00:27:12.374 { 00:27:12.374 "aliases": [ 00:27:12.374 "lvs/lvol" 00:27:12.374 ], 00:27:12.374 "assigned_rate_limits": { 00:27:12.374 "r_mbytes_per_sec": 0, 00:27:12.374 "rw_ios_per_sec": 0, 00:27:12.374 "rw_mbytes_per_sec": 0, 00:27:12.374 "w_mbytes_per_sec": 0 00:27:12.374 }, 00:27:12.374 "block_size": 4096, 00:27:12.374 "claimed": false, 00:27:12.374 "driver_specific": { 00:27:12.374 "lvol": { 00:27:12.374 "base_bdev": "aio_bdev", 00:27:12.374 "clone": false, 00:27:12.374 "esnap_clone": false, 00:27:12.374 "lvol_store_uuid": "796d5754-1e06-401f-ba5a-c1ef0adcbe88", 00:27:12.374 "num_allocated_clusters": 38, 00:27:12.374 "snapshot": false, 00:27:12.374 "thin_provision": false 00:27:12.374 } 00:27:12.374 }, 00:27:12.374 "name": "9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b", 00:27:12.374 "num_blocks": 38912, 00:27:12.374 "product_name": "Logical Volume", 00:27:12.374 "supported_io_types": { 00:27:12.374 "abort": false, 00:27:12.374 "compare": false, 00:27:12.374 "compare_and_write": false, 00:27:12.374 "copy": false, 00:27:12.374 "flush": false, 00:27:12.374 "get_zone_info": false, 00:27:12.374 "nvme_admin": false, 00:27:12.374 "nvme_io": false, 00:27:12.374 "nvme_io_md": false, 00:27:12.374 "nvme_iov_md": false, 00:27:12.374 "read": true, 00:27:12.374 "reset": true, 00:27:12.374 "seek_data": true, 00:27:12.374 "seek_hole": true, 00:27:12.374 "unmap": true, 00:27:12.374 "write": true, 00:27:12.374 "write_zeroes": true, 00:27:12.374 "zcopy": false, 00:27:12.374 "zone_append": false, 00:27:12.374 "zone_management": false 00:27:12.374 }, 00:27:12.374 "uuid": "9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b", 00:27:12.374 "zoned": false 00:27:12.374 } 00:27:12.374 ] 00:27:12.374 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:27:12.374 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:12.374 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:12.634 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:12.634 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:12.634 05:38:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:12.893 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:12.893 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9e2036b7-5bfc-4ea7-ad51-d85abcc50a5b 00:27:13.153 05:38:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 796d5754-1e06-401f-ba5a-c1ef0adcbe88 00:27:13.411 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:13.670 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:13.929 00:27:13.929 real 0m17.473s 00:27:13.929 user 0m15.997s 00:27:13.929 sys 0m2.816s 00:27:13.929 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:13.929 ************************************ 00:27:13.929 END TEST lvs_grow_clean 00:27:13.929 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:13.929 ************************************ 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:14.188 ************************************ 00:27:14.188 START TEST lvs_grow_dirty 00:27:14.188 ************************************ 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:14.188 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
00:27:14.189 05:38:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:14.447 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:14.447 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:14.706 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=76e0e185-4505-4866-a16f-2086a877f554 00:27:14.706 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:14.706 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:14.965 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:14.965 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:14.965 05:38:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 76e0e185-4505-4866-a16f-2086a877f554 lvol 150 00:27:15.223 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0a9895ab-7b72-4c3a-b34d-50decbc9bfea 00:27:15.223 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:15.223 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:15.482 [2024-11-20 05:38:35.331226] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:15.482 [2024-11-20 05:38:35.331296] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:15.482 true 00:27:15.482 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:15.482 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:15.810 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:15.810 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:16.069 05:38:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0a9895ab-7b72-4c3a-b34d-50decbc9bfea 00:27:16.327 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:16.327 [2024-11-20 05:38:36.215669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:16.327 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67108 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67108 /var/tmp/bdevperf.sock 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 67108 ']' 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:16.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:16.586 05:38:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:16.586 [2024-11-20 05:38:36.480988] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:27:16.586 [2024-11-20 05:38:36.481059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67108 ] 00:27:16.905 [2024-11-20 05:38:36.618264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.905 [2024-11-20 05:38:36.672135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.840 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:17.840 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:27:17.840 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:17.840 Nvme0n1 00:27:18.099 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:18.099 [ 00:27:18.099 { 00:27:18.099 "aliases": [ 00:27:18.099 "0a9895ab-7b72-4c3a-b34d-50decbc9bfea" 00:27:18.099 ], 00:27:18.099 "assigned_rate_limits": { 00:27:18.099 "r_mbytes_per_sec": 0, 00:27:18.099 "rw_ios_per_sec": 0, 00:27:18.099 "rw_mbytes_per_sec": 0, 00:27:18.099 "w_mbytes_per_sec": 0 00:27:18.099 }, 00:27:18.099 "block_size": 4096, 00:27:18.099 "claimed": false, 00:27:18.099 "driver_specific": { 00:27:18.099 "mp_policy": "active_passive", 00:27:18.099 "nvme": [ 00:27:18.099 { 00:27:18.099 "ctrlr_data": { 00:27:18.099 "ana_reporting": false, 00:27:18.099 "cntlid": 1, 00:27:18.099 "firmware_revision": "25.01", 00:27:18.099 "model_number": "SPDK bdev Controller", 00:27:18.099 "multi_ctrlr": true, 00:27:18.099 "oacs": { 00:27:18.099 "firmware": 0, 00:27:18.099 "format": 0, 00:27:18.099 "ns_manage": 0, 00:27:18.099 "security": 0 00:27:18.099 }, 00:27:18.099 "serial_number": "SPDK0", 00:27:18.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.099 "vendor_id": "0x8086" 00:27:18.099 }, 00:27:18.099 "ns_data": { 00:27:18.099 "can_share": true, 00:27:18.099 "id": 1 00:27:18.099 }, 00:27:18.099 "trid": { 00:27:18.099 "adrfam": "IPv4", 00:27:18.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.099 "traddr": "10.0.0.3", 00:27:18.099 "trsvcid": "4420", 00:27:18.099 "trtype": "TCP" 00:27:18.099 }, 00:27:18.099 "vs": { 00:27:18.099 "nvme_version": "1.3" 00:27:18.099 } 00:27:18.099 } 00:27:18.099 ] 00:27:18.099 }, 00:27:18.099 "memory_domains": [ 00:27:18.099 { 00:27:18.099 "dma_device_id": "system", 00:27:18.099 "dma_device_type": 1 00:27:18.099 } 00:27:18.099 ], 00:27:18.099 "name": "Nvme0n1", 00:27:18.099 "num_blocks": 38912, 00:27:18.099 "numa_id": -1, 00:27:18.099 "product_name": "NVMe disk", 00:27:18.099 "supported_io_types": { 00:27:18.099 "abort": true, 00:27:18.099 "compare": true, 00:27:18.099 "compare_and_write": true, 00:27:18.099 "copy": true, 00:27:18.099 "flush": true, 00:27:18.099 "get_zone_info": false, 00:27:18.099 "nvme_admin": true, 00:27:18.099 "nvme_io": true, 00:27:18.099 "nvme_io_md": false, 00:27:18.099 "nvme_iov_md": false, 00:27:18.099 "read": true, 00:27:18.099 "reset": true, 00:27:18.099 "seek_data": false, 00:27:18.099 "seek_hole": false, 00:27:18.099 "unmap": true, 00:27:18.099 
"write": true, 00:27:18.099 "write_zeroes": true, 00:27:18.099 "zcopy": false, 00:27:18.099 "zone_append": false, 00:27:18.099 "zone_management": false 00:27:18.099 }, 00:27:18.099 "uuid": "0a9895ab-7b72-4c3a-b34d-50decbc9bfea", 00:27:18.099 "zoned": false 00:27:18.099 } 00:27:18.099 ] 00:27:18.099 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:18.099 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67150 00:27:18.099 05:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:18.359 Running I/O for 10 seconds... 00:27:19.293 Latency(us) 00:27:19.293 [2024-11-20T05:38:39.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:19.293 Nvme0n1 : 1.00 10340.00 40.39 0.00 0.00 0.00 0.00 0.00 00:27:19.293 [2024-11-20T05:38:39.212Z] =================================================================================================================== 00:27:19.293 [2024-11-20T05:38:39.212Z] Total : 10340.00 40.39 0.00 0.00 0.00 0.00 0.00 00:27:19.293 00:27:20.227 05:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:20.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:20.227 Nvme0n1 : 2.00 10226.50 39.95 0.00 0.00 0.00 0.00 0.00 00:27:20.227 [2024-11-20T05:38:40.146Z] =================================================================================================================== 00:27:20.227 [2024-11-20T05:38:40.146Z] Total : 10226.50 39.95 0.00 0.00 0.00 0.00 0.00 00:27:20.227 00:27:20.485 true 00:27:20.485 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:20.485 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:20.744 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:20.744 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:20.744 05:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67150 00:27:21.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:21.312 Nvme0n1 : 3.00 10013.33 39.11 0.00 0.00 0.00 0.00 0.00 00:27:21.312 [2024-11-20T05:38:41.231Z] =================================================================================================================== 00:27:21.312 [2024-11-20T05:38:41.231Z] Total : 10013.33 39.11 0.00 0.00 0.00 0.00 0.00 00:27:21.312 00:27:22.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:22.247 Nvme0n1 : 4.00 9860.25 38.52 0.00 0.00 0.00 0.00 0.00 00:27:22.247 [2024-11-20T05:38:42.166Z] =================================================================================================================== 00:27:22.247 [2024-11-20T05:38:42.166Z] Total : 9860.25 38.52 0.00 0.00 
0.00 0.00 0.00 00:27:22.247 00:27:23.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:23.182 Nvme0n1 : 5.00 9771.80 38.17 0.00 0.00 0.00 0.00 0.00 00:27:23.182 [2024-11-20T05:38:43.101Z] =================================================================================================================== 00:27:23.182 [2024-11-20T05:38:43.101Z] Total : 9771.80 38.17 0.00 0.00 0.00 0.00 0.00 00:27:23.182 00:27:24.561 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:24.561 Nvme0n1 : 6.00 9269.50 36.21 0.00 0.00 0.00 0.00 0.00 00:27:24.561 [2024-11-20T05:38:44.480Z] =================================================================================================================== 00:27:24.561 [2024-11-20T05:38:44.480Z] Total : 9269.50 36.21 0.00 0.00 0.00 0.00 0.00 00:27:24.561 00:27:25.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:25.497 Nvme0n1 : 7.00 9212.14 35.98 0.00 0.00 0.00 0.00 0.00 00:27:25.497 [2024-11-20T05:38:45.416Z] =================================================================================================================== 00:27:25.497 [2024-11-20T05:38:45.416Z] Total : 9212.14 35.98 0.00 0.00 0.00 0.00 0.00 00:27:25.497 00:27:26.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:26.435 Nvme0n1 : 8.00 9142.50 35.71 0.00 0.00 0.00 0.00 0.00 00:27:26.435 [2024-11-20T05:38:46.354Z] =================================================================================================================== 00:27:26.435 [2024-11-20T05:38:46.354Z] Total : 9142.50 35.71 0.00 0.00 0.00 0.00 0.00 00:27:26.435 00:27:27.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:27.371 Nvme0n1 : 9.00 9136.78 35.69 0.00 0.00 0.00 0.00 0.00 00:27:27.371 [2024-11-20T05:38:47.290Z] =================================================================================================================== 00:27:27.371 [2024-11-20T05:38:47.290Z] Total : 9136.78 35.69 0.00 0.00 0.00 0.00 0.00 00:27:27.371 00:27:28.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:28.305 Nvme0n1 : 10.00 9118.10 35.62 0.00 0.00 0.00 0.00 0.00 00:27:28.305 [2024-11-20T05:38:48.224Z] =================================================================================================================== 00:27:28.305 [2024-11-20T05:38:48.224Z] Total : 9118.10 35.62 0.00 0.00 0.00 0.00 0.00 00:27:28.305 00:27:28.305 00:27:28.305 Latency(us) 00:27:28.305 [2024-11-20T05:38:48.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:28.305 Nvme0n1 : 10.00 9127.99 35.66 0.00 0.00 14018.70 5118.05 259647.39 00:27:28.305 [2024-11-20T05:38:48.224Z] =================================================================================================================== 00:27:28.305 [2024-11-20T05:38:48.224Z] Total : 9127.99 35.66 0.00 0.00 14018.70 5118.05 259647.39 00:27:28.305 { 00:27:28.305 "results": [ 00:27:28.305 { 00:27:28.305 "job": "Nvme0n1", 00:27:28.305 "core_mask": "0x2", 00:27:28.305 "workload": "randwrite", 00:27:28.305 "status": "finished", 00:27:28.305 "queue_depth": 128, 00:27:28.305 "io_size": 4096, 00:27:28.305 "runtime": 10.003188, 00:27:28.305 "iops": 9127.989996789023, 00:27:28.305 "mibps": 35.65621092495712, 00:27:28.305 "io_failed": 0, 00:27:28.305 "io_timeout": 0, 00:27:28.305 "avg_latency_us": 
14018.7041832313, 00:27:28.305 "min_latency_us": 5118.049523809524, 00:27:28.305 "max_latency_us": 259647.39047619049 00:27:28.305 } 00:27:28.305 ], 00:27:28.305 "core_count": 1 00:27:28.305 } 00:27:28.305 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67108 00:27:28.305 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 67108 ']' 00:27:28.305 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 67108 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67108 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:28.306 killing process with pid 67108 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67108' 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 67108 00:27:28.306 Received shutdown signal, test time was about 10.000000 seconds 00:27:28.306 00:27:28.306 Latency(us) 00:27:28.306 [2024-11-20T05:38:48.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.306 [2024-11-20T05:38:48.225Z] =================================================================================================================== 00:27:28.306 [2024-11-20T05:38:48.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.306 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 67108 00:27:28.565 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:27:28.825 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:29.083 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:29.083 05:38:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66528 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66528 00:27:29.349 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66528 Killed "${NVMF_APP[@]}" "$@" 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:29.349 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=67318 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 67318 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 67318 ']' 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:29.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:29.350 05:38:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:29.612 [2024-11-20 05:38:49.263557] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:29.612 [2024-11-20 05:38:49.263874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.612 [2024-11-20 05:38:49.406319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.612 [2024-11-20 05:38:49.458092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.612 [2024-11-20 05:38:49.458143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.612 [2024-11-20 05:38:49.458154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.612 [2024-11-20 05:38:49.458164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.612 [2024-11-20 05:38:49.458182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:29.612 [2024-11-20 05:38:49.458506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.549 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:30.549 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:27:30.549 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.549 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:30.549 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:30.549 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.549 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:30.808 [2024-11-20 05:38:50.555799] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:30.808 [2024-11-20 05:38:50.556247] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:30.808 [2024-11-20 05:38:50.556505] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0a9895ab-7b72-4c3a-b34d-50decbc9bfea 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=0a9895ab-7b72-4c3a-b34d-50decbc9bfea 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:30.808 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:31.067 05:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a9895ab-7b72-4c3a-b34d-50decbc9bfea -t 2000 00:27:31.326 [ 00:27:31.326 { 00:27:31.326 "aliases": [ 00:27:31.326 "lvs/lvol" 00:27:31.326 ], 00:27:31.326 "assigned_rate_limits": { 00:27:31.326 "r_mbytes_per_sec": 0, 00:27:31.326 "rw_ios_per_sec": 0, 00:27:31.326 "rw_mbytes_per_sec": 0, 00:27:31.326 "w_mbytes_per_sec": 0 00:27:31.326 }, 00:27:31.326 "block_size": 4096, 00:27:31.326 "claimed": false, 00:27:31.326 "driver_specific": { 00:27:31.326 "lvol": { 00:27:31.326 "base_bdev": "aio_bdev", 00:27:31.326 "clone": false, 00:27:31.326 "esnap_clone": false, 00:27:31.326 "lvol_store_uuid": "76e0e185-4505-4866-a16f-2086a877f554", 00:27:31.326 "num_allocated_clusters": 38, 00:27:31.326 "snapshot": false, 00:27:31.326 
"thin_provision": false 00:27:31.326 } 00:27:31.326 }, 00:27:31.326 "name": "0a9895ab-7b72-4c3a-b34d-50decbc9bfea", 00:27:31.326 "num_blocks": 38912, 00:27:31.326 "product_name": "Logical Volume", 00:27:31.326 "supported_io_types": { 00:27:31.326 "abort": false, 00:27:31.326 "compare": false, 00:27:31.326 "compare_and_write": false, 00:27:31.326 "copy": false, 00:27:31.326 "flush": false, 00:27:31.326 "get_zone_info": false, 00:27:31.326 "nvme_admin": false, 00:27:31.326 "nvme_io": false, 00:27:31.326 "nvme_io_md": false, 00:27:31.326 "nvme_iov_md": false, 00:27:31.326 "read": true, 00:27:31.327 "reset": true, 00:27:31.327 "seek_data": true, 00:27:31.327 "seek_hole": true, 00:27:31.327 "unmap": true, 00:27:31.327 "write": true, 00:27:31.327 "write_zeroes": true, 00:27:31.327 "zcopy": false, 00:27:31.327 "zone_append": false, 00:27:31.327 "zone_management": false 00:27:31.327 }, 00:27:31.327 "uuid": "0a9895ab-7b72-4c3a-b34d-50decbc9bfea", 00:27:31.327 "zoned": false 00:27:31.327 } 00:27:31.327 ] 00:27:31.327 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:27:31.327 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:27:31.327 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:31.585 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:27:31.585 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:31.585 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:27:31.844 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:27:31.844 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:32.103 [2024-11-20 05:38:51.825360] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.103 05:38:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:32.103 05:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:32.362 2024/11/20 05:38:52 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:76e0e185-4505-4866-a16f-2086a877f554], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:27:32.362 request: 00:27:32.362 { 00:27:32.362 "method": "bdev_lvol_get_lvstores", 00:27:32.362 "params": { 00:27:32.362 "uuid": "76e0e185-4505-4866-a16f-2086a877f554" 00:27:32.362 } 00:27:32.362 } 00:27:32.362 Got JSON-RPC error response 00:27:32.362 GoRPCClient: error on JSON-RPC call 00:27:32.362 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:27:32.362 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:32.362 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:32.362 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:32.362 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:32.621 aio_bdev 00:27:32.621 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0a9895ab-7b72-4c3a-b34d-50decbc9bfea 00:27:32.621 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=0a9895ab-7b72-4c3a-b34d-50decbc9bfea 00:27:32.621 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:27:32.621 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:27:32.621 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:27:32.621 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:27:32.621 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:32.880 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a9895ab-7b72-4c3a-b34d-50decbc9bfea -t 2000 00:27:33.139 [ 
00:27:33.139 { 00:27:33.139 "aliases": [ 00:27:33.139 "lvs/lvol" 00:27:33.139 ], 00:27:33.139 "assigned_rate_limits": { 00:27:33.139 "r_mbytes_per_sec": 0, 00:27:33.139 "rw_ios_per_sec": 0, 00:27:33.139 "rw_mbytes_per_sec": 0, 00:27:33.139 "w_mbytes_per_sec": 0 00:27:33.139 }, 00:27:33.139 "block_size": 4096, 00:27:33.139 "claimed": false, 00:27:33.139 "driver_specific": { 00:27:33.139 "lvol": { 00:27:33.139 "base_bdev": "aio_bdev", 00:27:33.139 "clone": false, 00:27:33.139 "esnap_clone": false, 00:27:33.139 "lvol_store_uuid": "76e0e185-4505-4866-a16f-2086a877f554", 00:27:33.139 "num_allocated_clusters": 38, 00:27:33.139 "snapshot": false, 00:27:33.139 "thin_provision": false 00:27:33.139 } 00:27:33.139 }, 00:27:33.139 "name": "0a9895ab-7b72-4c3a-b34d-50decbc9bfea", 00:27:33.139 "num_blocks": 38912, 00:27:33.139 "product_name": "Logical Volume", 00:27:33.139 "supported_io_types": { 00:27:33.139 "abort": false, 00:27:33.139 "compare": false, 00:27:33.139 "compare_and_write": false, 00:27:33.139 "copy": false, 00:27:33.139 "flush": false, 00:27:33.139 "get_zone_info": false, 00:27:33.139 "nvme_admin": false, 00:27:33.139 "nvme_io": false, 00:27:33.139 "nvme_io_md": false, 00:27:33.139 "nvme_iov_md": false, 00:27:33.139 "read": true, 00:27:33.139 "reset": true, 00:27:33.139 "seek_data": true, 00:27:33.139 "seek_hole": true, 00:27:33.139 "unmap": true, 00:27:33.139 "write": true, 00:27:33.139 "write_zeroes": true, 00:27:33.139 "zcopy": false, 00:27:33.139 "zone_append": false, 00:27:33.139 "zone_management": false 00:27:33.139 }, 00:27:33.139 "uuid": "0a9895ab-7b72-4c3a-b34d-50decbc9bfea", 00:27:33.139 "zoned": false 00:27:33.139 } 00:27:33.139 ] 00:27:33.139 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:27:33.139 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:33.139 05:38:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:33.398 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:33.398 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:33.398 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:33.657 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:33.657 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0a9895ab-7b72-4c3a-b34d-50decbc9bfea 00:27:33.988 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76e0e185-4505-4866-a16f-2086a877f554 00:27:34.267 05:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:34.526 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:27:35.094 ************************************ 00:27:35.094 END TEST lvs_grow_dirty 00:27:35.094 ************************************ 00:27:35.094 00:27:35.094 real 0m20.921s 00:27:35.094 user 0m40.277s 00:27:35.094 sys 0m7.893s 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:35.094 nvmf_trace.0 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.094 05:38:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:27:35.353 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.353 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:27:35.353 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.353 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.353 rmmod nvme_tcp 00:27:35.353 rmmod nvme_fabrics 00:27:35.353 rmmod nvme_keyring 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 67318 ']' 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 67318 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 67318 ']' 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 67318 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:27:35.612 05:38:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67318 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:35.612 killing process with pid 67318 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67318' 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 67318 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 67318 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:35.612 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:27:35.872 ************************************ 00:27:35.872 END TEST nvmf_lvs_grow 00:27:35.872 ************************************ 00:27:35.872 00:27:35.872 real 0m40.831s 00:27:35.872 user 1m3.169s 00:27:35.872 sys 0m11.717s 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:35.872 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:27:36.132 ************************************ 00:27:36.132 START TEST nvmf_bdev_io_wait 00:27:36.132 ************************************ 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:27:36.132 * Looking for test storage... 
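The trace that follows is scripts/common.sh probing the installed lcov (1.15 here, taken via awk '{print $NF}' from lcov --version) and comparing it against 2 with cmp_versions: versions older than 2.x get the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options exported in LCOV_OPTS. The comparison is easier to follow as a standalone sketch; this is a reconstruction from the trace below, not the full helper, which supports more operators:

    # lt A B: succeed when dot-separated version A sorts before B
    # (sketch of the cmp_versions logic traced below)
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"    # split on . - : exactly as the trace shows
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov < 2: use the legacy --rc lcov_*_coverage=1 options'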
00:27:36.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:27:36.132 05:38:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:27:36.132 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.133 --rc genhtml_branch_coverage=1 00:27:36.133 --rc genhtml_function_coverage=1 00:27:36.133 --rc genhtml_legend=1 00:27:36.133 --rc geninfo_all_blocks=1 00:27:36.133 --rc geninfo_unexecuted_blocks=1 00:27:36.133 00:27:36.133 ' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.133 --rc genhtml_branch_coverage=1 00:27:36.133 --rc genhtml_function_coverage=1 00:27:36.133 --rc genhtml_legend=1 00:27:36.133 --rc geninfo_all_blocks=1 00:27:36.133 --rc geninfo_unexecuted_blocks=1 00:27:36.133 00:27:36.133 ' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.133 --rc genhtml_branch_coverage=1 00:27:36.133 --rc genhtml_function_coverage=1 00:27:36.133 --rc genhtml_legend=1 00:27:36.133 --rc geninfo_all_blocks=1 00:27:36.133 --rc geninfo_unexecuted_blocks=1 00:27:36.133 00:27:36.133 ' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:36.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.133 --rc genhtml_branch_coverage=1 00:27:36.133 --rc genhtml_function_coverage=1 00:27:36.133 --rc genhtml_legend=1 00:27:36.133 --rc geninfo_all_blocks=1 00:27:36.133 --rc geninfo_unexecuted_blocks=1 00:27:36.133 00:27:36.133 ' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.133 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.133 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
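Two benchmark constants are now set (a 64 MB malloc bdev with 512-byte blocks), and nvmftestinit takes over. It first tears down any leftover interfaces, which is why the next lines are full of harmless "Cannot find device" and "No such file or directory" messages, then rebuilds the virtual topology: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all host-side peers enslaved to one bridge. Condensed to its essentials from the ip commands traced below (one pair per side shown; the script creates two of each):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge stitches the pairs together
    ip link set nvmf_tgt_br master nvmf_br

The result is that the initiator's 10.0.0.1/10.0.0.2 addresses can reach the target's 10.0.0.3/10.0.0.4 across the bridge, which the four pings traced below confirm.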
00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:36.393 
05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:36.393 Cannot find device "nvmf_init_br" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:36.393 Cannot find device "nvmf_init_br2" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:36.393 Cannot find device "nvmf_tgt_br" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:36.393 Cannot find device "nvmf_tgt_br2" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:36.393 Cannot find device "nvmf_init_br" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:36.393 Cannot find device "nvmf_init_br2" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:36.393 Cannot find device "nvmf_tgt_br" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:36.393 Cannot find device "nvmf_tgt_br2" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:36.393 Cannot find device "nvmf_br" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:36.393 Cannot find device "nvmf_init_if" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:36.393 Cannot find device "nvmf_init_if2" 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:36.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:27:36.393 
05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:36.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:36.393 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:36.394 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:36.394 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:36.394 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:36.394 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:36.394 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:36.394 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:36.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:36.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:27:36.653 00:27:36.653 --- 10.0.0.3 ping statistics --- 00:27:36.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.653 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:36.653 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:36.653 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:27:36.653 00:27:36.653 --- 10.0.0.4 ping statistics --- 00:27:36.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.653 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:36.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:36.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:27:36.653 00:27:36.653 --- 10.0.0.1 ping statistics --- 00:27:36.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.653 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:36.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:36.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:27:36.653 00:27:36.653 --- 10.0.0.2 ping statistics --- 00:27:36.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.653 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:36.653 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67796 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67796 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 67796 ']' 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:36.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:36.654 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:36.912 [2024-11-20 05:38:56.579636] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
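All four pings pass, nvme-tcp is loaded, and nvmfappstart launches the target inside the namespace with --wait-for-rpc, so the app (pid 67796) idles until RPCs arrive on /var/tmp/spdk.sock. Stripped of the xtrace noise, the bring-up the test performs next reduces to roughly this sequence (a sketch using the rpc.py equivalents of the rpc_cmd calls traced below; paths shortened from the absolute /home/vagrant/spdk_repo/spdk ones in the log):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    ./scripts/rpc.py bdev_set_options -p 5 -c 1        # bdev I/O pool/cache sizing, must precede init
    ./scripts/rpc.py framework_start_init              # leave the --wait-for-rpc holding state
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

After the listener RPC the target announces "NVMe/TCP Target Listening on 10.0.0.3 port 4420", which is the cue for the bdevperf initiators.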
00:27:36.912 [2024-11-20 05:38:56.579743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.912 [2024-11-20 05:38:56.742523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.912 [2024-11-20 05:38:56.808918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.912 [2024-11-20 05:38:56.808996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.912 [2024-11-20 05:38:56.809019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.912 [2024-11-20 05:38:56.809038] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.912 [2024-11-20 05:38:56.809056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.912 [2024-11-20 05:38:56.810221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.912 [2024-11-20 05:38:56.810321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.912 [2024-11-20 05:38:56.810485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.912 [2024-11-20 05:38:56.810491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.171 [2024-11-20 05:38:56.973950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.171 05:38:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:37.171 Malloc0 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:37.171 [2024-11-20 05:38:57.030205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67831 00:27:37.171 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67833 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.172 { 00:27:37.172 "params": { 
00:27:37.172 "name": "Nvme$subsystem", 00:27:37.172 "trtype": "$TEST_TRANSPORT", 00:27:37.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.172 "adrfam": "ipv4", 00:27:37.172 "trsvcid": "$NVMF_PORT", 00:27:37.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.172 "hdgst": ${hdgst:-false}, 00:27:37.172 "ddgst": ${ddgst:-false} 00:27:37.172 }, 00:27:37.172 "method": "bdev_nvme_attach_controller" 00:27:37.172 } 00:27:37.172 EOF 00:27:37.172 )") 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67835 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.172 { 00:27:37.172 "params": { 00:27:37.172 "name": "Nvme$subsystem", 00:27:37.172 "trtype": "$TEST_TRANSPORT", 00:27:37.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.172 "adrfam": "ipv4", 00:27:37.172 "trsvcid": "$NVMF_PORT", 00:27:37.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.172 "hdgst": ${hdgst:-false}, 00:27:37.172 "ddgst": ${ddgst:-false} 00:27:37.172 }, 00:27:37.172 "method": "bdev_nvme_attach_controller" 00:27:37.172 } 00:27:37.172 EOF 00:27:37.172 )") 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67837 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.172 { 00:27:37.172 "params": { 00:27:37.172 "name": "Nvme$subsystem", 00:27:37.172 "trtype": "$TEST_TRANSPORT", 00:27:37.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.172 "adrfam": "ipv4", 00:27:37.172 "trsvcid": "$NVMF_PORT", 00:27:37.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.172 "hdgst": ${hdgst:-false}, 00:27:37.172 "ddgst": ${ddgst:-false} 00:27:37.172 }, 00:27:37.172 "method": "bdev_nvme_attach_controller" 00:27:37.172 } 00:27:37.172 EOF 00:27:37.172 )") 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:37.172 "params": { 00:27:37.172 "name": "Nvme1", 00:27:37.172 "trtype": "tcp", 00:27:37.172 "traddr": "10.0.0.3", 00:27:37.172 "adrfam": "ipv4", 00:27:37.172 "trsvcid": "4420", 00:27:37.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.172 "hdgst": false, 00:27:37.172 "ddgst": false 00:27:37.172 }, 00:27:37.172 "method": "bdev_nvme_attach_controller" 00:27:37.172 }' 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:37.172 { 00:27:37.172 "params": { 00:27:37.172 "name": "Nvme$subsystem", 00:27:37.172 "trtype": "$TEST_TRANSPORT", 00:27:37.172 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.172 "adrfam": "ipv4", 00:27:37.172 "trsvcid": "$NVMF_PORT", 00:27:37.172 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.172 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.172 "hdgst": ${hdgst:-false}, 00:27:37.172 "ddgst": ${ddgst:-false} 00:27:37.172 }, 00:27:37.172 "method": "bdev_nvme_attach_controller" 00:27:37.172 } 00:27:37.172 EOF 00:27:37.172 )") 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
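The printf traced here resolves the template into a concrete controller attach: bdev Nvme1 over TCP to 10.0.0.3:4420, subsystem nqn.2016-06.io.spdk:cnode1, host nqn.2016-06.io.spdk:host1, digests off (the remaining jobs' configs resolve identically below). For interactive debugging the same attach can be issued against a live app directly; a sketch, with flag spellings assumed from SPDK's rpc.py rather than taken from this log:

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1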
00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:37.172 "params": { 00:27:37.172 "name": "Nvme1", 00:27:37.172 "trtype": "tcp", 00:27:37.172 "traddr": "10.0.0.3", 00:27:37.172 "adrfam": "ipv4", 00:27:37.172 "trsvcid": "4420", 00:27:37.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.172 "hdgst": false, 00:27:37.172 "ddgst": false 00:27:37.172 }, 00:27:37.172 "method": "bdev_nvme_attach_controller" 00:27:37.172 }' 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:37.172 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:37.172 "params": { 00:27:37.172 "name": "Nvme1", 00:27:37.172 "trtype": "tcp", 00:27:37.172 "traddr": "10.0.0.3", 00:27:37.172 "adrfam": "ipv4", 00:27:37.172 "trsvcid": "4420", 00:27:37.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.172 "hdgst": false, 00:27:37.173 "ddgst": false 00:27:37.173 }, 00:27:37.173 "method": "bdev_nvme_attach_controller" 00:27:37.173 }' 00:27:37.173 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:37.173 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:37.173 "params": { 00:27:37.173 "name": "Nvme1", 00:27:37.173 "trtype": "tcp", 00:27:37.173 "traddr": "10.0.0.3", 00:27:37.173 "adrfam": "ipv4", 00:27:37.173 "trsvcid": "4420", 00:27:37.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.173 "hdgst": false, 00:27:37.173 "ddgst": false 00:27:37.173 }, 00:27:37.173 "method": "bdev_nvme_attach_controller" 00:27:37.173 }' 00:27:37.432 05:38:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67831 00:27:37.432 [2024-11-20 05:38:57.120681] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:37.432 [2024-11-20 05:38:57.120787] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:27:37.432 [2024-11-20 05:38:57.127406] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:37.432 [2024-11-20 05:38:57.127548] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:27:37.432 [2024-11-20 05:38:57.128115] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:37.432 [2024-11-20 05:38:57.128190] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:37.432 [2024-11-20 05:38:57.147046] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:27:37.432 [2024-11-20 05:38:57.147158] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:27:37.432 [2024-11-20 05:38:57.342565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.691 [2024-11-20 05:38:57.396711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:37.691 [2024-11-20 05:38:57.400454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.691 [2024-11-20 05:38:57.460399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:37.691 [2024-11-20 05:38:57.481683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.691 Running I/O for 1 seconds... 00:27:37.691 [2024-11-20 05:38:57.548096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:37.691 [2024-11-20 05:38:57.554707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.691 Running I/O for 1 seconds... 00:27:37.950 [2024-11-20 05:38:57.613385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:37.950 Running I/O for 1 seconds... 00:27:37.950 Running I/O for 1 seconds... 00:27:38.891 6834.00 IOPS, 26.70 MiB/s 00:27:38.891 Latency(us) 00:27:38.891 [2024-11-20T05:38:58.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.891 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:27:38.891 Nvme1n1 : 1.02 6844.41 26.74 0.00 0.00 18458.10 8925.38 32206.26 00:27:38.891 [2024-11-20T05:38:58.810Z] =================================================================================================================== 00:27:38.891 [2024-11-20T05:38:58.810Z] Total : 6844.41 26.74 0.00 0.00 18458.10 8925.38 32206.26 00:27:38.891 9483.00 IOPS, 37.04 MiB/s 00:27:38.891 Latency(us) 00:27:38.891 [2024-11-20T05:38:58.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.891 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:27:38.891 Nvme1n1 : 1.01 9535.37 37.25 0.00 0.00 13362.20 6709.64 23218.47 00:27:38.891 [2024-11-20T05:38:58.810Z] =================================================================================================================== 00:27:38.891 [2024-11-20T05:38:58.810Z] Total : 9535.37 37.25 0.00 0.00 13362.20 6709.64 23218.47 00:27:38.891 203360.00 IOPS, 794.38 MiB/s 00:27:38.891 Latency(us) 00:27:38.891 [2024-11-20T05:38:58.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.891 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:27:38.891 Nvme1n1 : 1.00 202984.76 792.91 0.00 0.00 627.11 298.42 1833.45 00:27:38.891 [2024-11-20T05:38:58.810Z] =================================================================================================================== 00:27:38.891 [2024-11-20T05:38:58.810Z] Total : 202984.76 792.91 0.00 0.00 627.11 298.42 1833.45 00:27:38.891 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67833 00:27:38.891 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67835 00:27:38.891 7511.00 IOPS, 29.34 MiB/s 00:27:38.891 Latency(us) 00:27:38.891 [2024-11-20T05:38:58.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.891 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, 
depth: 128, IO size: 4096) 00:27:38.891 Nvme1n1 : 1.01 7619.02 29.76 0.00 0.00 16756.97 3073.95 44439.65 00:27:38.891 [2024-11-20T05:38:58.810Z] =================================================================================================================== 00:27:38.891 [2024-11-20T05:38:58.810Z] Total : 7619.02 29.76 0.00 0.00 16756.97 3073.95 44439.65 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67837 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:27:39.156 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:39.157 rmmod nvme_tcp 00:27:39.157 rmmod nvme_fabrics 00:27:39.157 rmmod nvme_keyring 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67796 ']' 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67796 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 67796 ']' 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 67796 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:39.157 05:38:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67796 00:27:39.157 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:39.157 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:39.157 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67796' 00:27:39.157 killing process with pid 67796 
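Condensing the four Latency(us) tables above into one view (numbers copied from the Total rows):

    workload   core mask   IOPS         MiB/s    avg latency (us)
    write      0x10        9535.37      37.25    13362.20
    read       0x20        6844.41      26.74    18458.10
    flush      0x40        202984.76    792.91   627.11
    unmap      0x80        7619.02      29.76    16756.97

The flush outlier is expected: against a RAM-backed malloc bdev a flush has no media work to do, so that job mostly measures the software path. The MiB/s column is just IOPS times the 4 KiB IO size, which is easy to sanity-check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 9535.37 * 4096 / (1024 * 1024) }'
    # prints 37.25 MiB/s, matching the write row above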
00:27:39.157 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 67796 00:27:39.157 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 67796 00:27:39.417 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:39.417 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:39.417 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:39.417 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:27:39.417 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:39.417 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:27:39.417 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:39.418 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:27:39.676 00:27:39.676 real 
0m3.647s 00:27:39.676 user 0m14.172s 00:27:39.676 sys 0m2.270s 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.676 ************************************ 00:27:39.676 END TEST nvmf_bdev_io_wait 00:27:39.676 ************************************ 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:27:39.676 ************************************ 00:27:39.676 START TEST nvmf_queue_depth 00:27:39.676 ************************************ 00:27:39.676 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:27:39.936 * Looking for test storage... 00:27:39.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:27:39.936 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.937 --rc genhtml_branch_coverage=1 00:27:39.937 --rc genhtml_function_coverage=1 00:27:39.937 --rc genhtml_legend=1 00:27:39.937 --rc geninfo_all_blocks=1 00:27:39.937 --rc geninfo_unexecuted_blocks=1 00:27:39.937 00:27:39.937 ' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.937 --rc genhtml_branch_coverage=1 00:27:39.937 --rc genhtml_function_coverage=1 00:27:39.937 --rc genhtml_legend=1 00:27:39.937 --rc geninfo_all_blocks=1 00:27:39.937 --rc geninfo_unexecuted_blocks=1 00:27:39.937 00:27:39.937 ' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.937 --rc genhtml_branch_coverage=1 00:27:39.937 --rc genhtml_function_coverage=1 00:27:39.937 --rc genhtml_legend=1 00:27:39.937 --rc geninfo_all_blocks=1 00:27:39.937 --rc geninfo_unexecuted_blocks=1 00:27:39.937 00:27:39.937 ' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:39.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.937 --rc genhtml_branch_coverage=1 00:27:39.937 --rc genhtml_function_coverage=1 00:27:39.937 --rc genhtml_legend=1 00:27:39.937 --rc geninfo_all_blocks=1 00:27:39.937 --rc geninfo_unexecuted_blocks=1 00:27:39.937 00:27:39.937 ' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:27:39.937 
05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:39.937 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:39.938 05:38:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:39.938 Cannot find device "nvmf_init_br" 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:39.938 Cannot find device "nvmf_init_br2" 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:39.938 Cannot find device "nvmf_tgt_br" 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:39.938 Cannot find device "nvmf_tgt_br2" 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:39.938 Cannot find device "nvmf_init_br" 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:39.938 Cannot find device "nvmf_init_br2" 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:27:39.938 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:40.197 Cannot find device "nvmf_tgt_br" 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:40.197 Cannot find device "nvmf_tgt_br2" 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:40.197 Cannot find device "nvmf_br" 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:40.197 Cannot find device "nvmf_init_if" 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:40.197 Cannot find device "nvmf_init_if2" 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:40.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:40.197 05:38:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:40.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:40.197 05:38:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:40.197 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:40.456 
05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:40.456 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:40.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:40.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:27:40.456 00:27:40.456 --- 10.0.0.3 ping statistics --- 00:27:40.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.456 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:40.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:40.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:27:40.457 00:27:40.457 --- 10.0.0.4 ping statistics --- 00:27:40.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.457 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:40.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:40.457 00:27:40.457 --- 10.0.0.1 ping statistics --- 00:27:40.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.457 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:40.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:40.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:27:40.457 00:27:40.457 --- 10.0.0.2 ping statistics --- 00:27:40.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.457 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=68102 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 68102 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 68102 ']' 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.457 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.457 [2024-11-20 05:39:00.311489] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
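For readers following the nvmf_veth_init trace above: the plumbing reduces to a network namespace, veth pairs, one bridge, and ACCEPT rules for port 4420, verified by the four pings. A condensed bring-up using the same commands the trace shows (run as root; the second interface pair nvmf_init_if2/nvmf_tgt_if2 and the iptables -m comment tags are omitted for brevity):

# Target side lives in its own namespace; initiator side stays in the host.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br      # bridge joins the two halves
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # host -> target-namespace address over the bridge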
00:27:40.457 [2024-11-20 05:39:00.311568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.716 [2024-11-20 05:39:00.468413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.716 [2024-11-20 05:39:00.531104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.716 [2024-11-20 05:39:00.531165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.716 [2024-11-20 05:39:00.531180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.716 [2024-11-20 05:39:00.531194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.716 [2024-11-20 05:39:00.531205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.716 [2024-11-20 05:39:00.531600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 [2024-11-20 05:39:00.703050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 Malloc0 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
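The nvmfappstart step above amounts to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers before any rpc_cmd is issued. A rough equivalent of what waitforlisten does; the polling loop here is an illustration, not the helper's actual code, and it assumes the default /var/tmp/spdk.sock socket path (unix sockets are filesystem objects, so they stay reachable from outside the netns):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll until the app services RPCs; rpc.py exits non-zero until then.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods \
    >/dev/null 2>&1; do
  sleep 0.2
done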
00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 [2024-11-20 05:39:00.754974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68144 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68144 /var/tmp/bdevperf.sock 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 68144 ']' 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:40.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.975 05:39:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:40.975 [2024-11-20 05:39:00.813940] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
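The rpc_cmd calls above provision the target end to end. Translated to plain rpc.py invocations (arguments exactly as traced, assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock), followed by the bdevperf attach and test kick-off that the next records show; bdevperf runs with -z, so it idles on its own RPC socket until a controller is attached:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# bdevperf side: queue depth 1024 is the property under test here.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests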
00:27:40.975 [2024-11-20 05:39:00.814046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68144 ] 00:27:41.234 [2024-11-20 05:39:00.964257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.234 [2024-11-20 05:39:01.028537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.234 05:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:41.234 05:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:27:41.234 05:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:41.234 05:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.234 05:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:41.492 NVMe0n1 00:27:41.492 05:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.492 05:39:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:41.492 Running I/O for 10 seconds... 00:27:43.805 9216.00 IOPS, 36.00 MiB/s [2024-11-20T05:39:04.657Z] 9490.00 IOPS, 37.07 MiB/s [2024-11-20T05:39:05.683Z] 9605.67 IOPS, 37.52 MiB/s [2024-11-20T05:39:06.619Z] 9944.50 IOPS, 38.85 MiB/s [2024-11-20T05:39:07.556Z] 10089.40 IOPS, 39.41 MiB/s [2024-11-20T05:39:08.495Z] 10298.00 IOPS, 40.23 MiB/s [2024-11-20T05:39:09.432Z] 10470.57 IOPS, 40.90 MiB/s [2024-11-20T05:39:10.371Z] 10488.75 IOPS, 40.97 MiB/s [2024-11-20T05:39:11.749Z] 10462.00 IOPS, 40.87 MiB/s [2024-11-20T05:39:11.749Z] 10525.20 IOPS, 41.11 MiB/s 00:27:51.830 Latency(us) 00:27:51.830 [2024-11-20T05:39:11.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.830 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:27:51.830 Verification LBA range: start 0x0 length 0x4000 00:27:51.830 NVMe0n1 : 10.09 10527.43 41.12 0.00 0.00 96843.38 22594.32 82388.11 00:27:51.830 [2024-11-20T05:39:11.749Z] =================================================================================================================== 00:27:51.830 [2024-11-20T05:39:11.749Z] Total : 10527.43 41.12 0.00 0.00 96843.38 22594.32 82388.11 00:27:51.830 { 00:27:51.830 "results": [ 00:27:51.830 { 00:27:51.830 "job": "NVMe0n1", 00:27:51.830 "core_mask": "0x1", 00:27:51.830 "workload": "verify", 00:27:51.830 "status": "finished", 00:27:51.830 "verify_range": { 00:27:51.830 "start": 0, 00:27:51.830 "length": 16384 00:27:51.830 }, 00:27:51.830 "queue_depth": 1024, 00:27:51.830 "io_size": 4096, 00:27:51.830 "runtime": 10.090685, 00:27:51.830 "iops": 10527.431983061606, 00:27:51.830 "mibps": 41.1227811838344, 00:27:51.830 "io_failed": 0, 00:27:51.830 "io_timeout": 0, 00:27:51.830 "avg_latency_us": 96843.38448453454, 00:27:51.830 "min_latency_us": 22594.31619047619, 00:27:51.830 "max_latency_us": 82388.11428571428 00:27:51.830 } 00:27:51.830 ], 00:27:51.830 "core_count": 1 00:27:51.830 } 00:27:51.830 05:39:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 68144 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 68144 ']' 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 68144 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68144 00:27:51.830 killing process with pid 68144 00:27:51.830 Received shutdown signal, test time was about 10.000000 seconds 00:27:51.830 00:27:51.830 Latency(us) 00:27:51.830 [2024-11-20T05:39:11.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.830 [2024-11-20T05:39:11.749Z] =================================================================================================================== 00:27:51.830 [2024-11-20T05:39:11.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68144' 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 68144 00:27:51.830 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 68144 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:51.831 rmmod nvme_tcp 00:27:51.831 rmmod nvme_fabrics 00:27:51.831 rmmod nvme_keyring 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 68102 ']' 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 68102 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 68102 ']' 00:27:51.831 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- 
# kill -0 68102 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68102 00:27:52.089 killing process with pid 68102 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68102' 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 68102 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 68102 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:52.089 05:39:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:27:52.348 ************************************ 00:27:52.348 END TEST nvmf_queue_depth 00:27:52.348 ************************************ 00:27:52.348 00:27:52.348 real 0m12.744s 00:27:52.348 user 0m21.152s 00:27:52.348 sys 0m2.407s 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:52.348 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:27:52.606 ************************************ 00:27:52.606 START TEST nvmf_target_multipath 00:27:52.606 ************************************ 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:27:52.606 * Looking for test storage... 
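The multipath suite just launched above goes through the same run_test wrapper that closed the queue-depth block: a START banner, a timed run of the script, the real/user/sys summary, and an END banner. A schematic of that shape, assuming names for illustration only (the real helper lives in autotest_common.sh and its body is not shown in this log):

run_test() {                               # illustrative sketch, not the helper
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"                                # yields the real/user/sys lines above
  local rc=$?
  echo "************ END TEST $name ************"
  return $rc
}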
00:27:52.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:27:52.606 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:52.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.866 --rc genhtml_branch_coverage=1 00:27:52.866 --rc genhtml_function_coverage=1 00:27:52.866 --rc genhtml_legend=1 00:27:52.866 --rc geninfo_all_blocks=1 00:27:52.866 --rc geninfo_unexecuted_blocks=1 00:27:52.866 00:27:52.866 ' 00:27:52.866 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:52.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.866 --rc genhtml_branch_coverage=1 00:27:52.867 --rc genhtml_function_coverage=1 00:27:52.867 --rc genhtml_legend=1 00:27:52.867 --rc geninfo_all_blocks=1 00:27:52.867 --rc geninfo_unexecuted_blocks=1 00:27:52.867 00:27:52.867 ' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:52.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.867 --rc genhtml_branch_coverage=1 00:27:52.867 --rc genhtml_function_coverage=1 00:27:52.867 --rc genhtml_legend=1 00:27:52.867 --rc geninfo_all_blocks=1 00:27:52.867 --rc geninfo_unexecuted_blocks=1 00:27:52.867 00:27:52.867 ' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:52.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.867 --rc genhtml_branch_coverage=1 00:27:52.867 --rc genhtml_function_coverage=1 00:27:52.867 --rc genhtml_legend=1 00:27:52.867 --rc geninfo_all_blocks=1 00:27:52.867 --rc geninfo_unexecuted_blocks=1 00:27:52.867 00:27:52.867 ' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.867 
05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:52.867 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:52.867 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:52.868 05:39:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:52.868 Cannot find device "nvmf_init_br" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:52.868 Cannot find device "nvmf_init_br2" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:52.868 Cannot find device "nvmf_tgt_br" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:52.868 Cannot find device "nvmf_tgt_br2" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:52.868 Cannot find device "nvmf_init_br" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:52.868 Cannot find device "nvmf_init_br2" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:52.868 Cannot find device "nvmf_tgt_br" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:52.868 Cannot find device "nvmf_tgt_br2" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:52.868 Cannot find device "nvmf_br" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:52.868 Cannot find device "nvmf_init_if" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:52.868 Cannot find device "nvmf_init_if2" 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:52.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:52.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:52.868 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:53.127 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
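
For orientation, the nvmf_veth_init sequence traced above (together with the bridge and iptables steps that follow just below) condenses to the sketch here; interface and namespace names match the trace, while error handling and the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4) are left out:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # the bridge stitches both pairs together
    ip link set nvmf_tgt_br master nvmf_br
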
00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:53.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:53.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:27:53.128 00:27:53.128 --- 10.0.0.3 ping statistics --- 00:27:53.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.128 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:53.128 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:53.128 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:27:53.128 00:27:53.128 --- 10.0.0.4 ping statistics --- 00:27:53.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.128 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:27:53.128 05:39:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:53.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:27:53.128 00:27:53.128 --- 10.0.0.1 ping statistics --- 00:27:53.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.128 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:53.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:27:53.128 00:27:53.128 --- 10.0.0.2 ping statistics --- 00:27:53.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.128 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:53.128 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68517 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68517 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 68517 ']' 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:53.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
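
At this point the target application has been launched inside the namespace and the harness blocks until its RPC socket answers. A minimal sketch of that launch-and-wait step, with the binary path and flags taken from the trace (the polling loop is illustrative, not the verbatim waitforlisten implementation):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the app is up and serving JSON-RPC on /var/tmp/spdk.sock
    until "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || exit 1   # give up if the target died during startup
        sleep 0.5
    done
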
00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:53.386 05:39:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:53.386 [2024-11-20 05:39:13.111968] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:53.386 [2024-11-20 05:39:13.112099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.386 [2024-11-20 05:39:13.273849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.645 [2024-11-20 05:39:13.339939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.645 [2024-11-20 05:39:13.340001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.645 [2024-11-20 05:39:13.340021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.645 [2024-11-20 05:39:13.340037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.645 [2024-11-20 05:39:13.340052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.645 [2024-11-20 05:39:13.341197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.645 [2024-11-20 05:39:13.341344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.645 [2024-11-20 05:39:13.341300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.645 [2024-11-20 05:39:13.341348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.581 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:54.581 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:27:54.581 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.581 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:54.581 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:54.581 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.581 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:54.840 [2024-11-20 05:39:14.534756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.840 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:55.098 Malloc0 00:27:55.098 05:39:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
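
Provisioning then proceeds over JSON-RPC; condensed, the calls traced here and continued just below are as follows (rpc abbreviates the traced rpc.py path, and -r on nvmf_create_subsystem enables the ANA reporting the multipath checks rely on):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport with the traced options
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # one listener per path; the test later flips each listener's ANA state
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
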
00:27:55.357 05:39:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:55.621 05:39:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:55.880 [2024-11-20 05:39:15.538874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:55.880 05:39:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:27:55.880 [2024-11-20 05:39:15.739050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:27:55.880 05:39:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:27:56.139 05:39:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:27:56.397 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:27:56.397 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:27:56.397 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:27:56.397 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:27:56.397 05:39:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68660 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:27:58.302 05:39:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:27:58.561 [global] 00:27:58.561 thread=1 00:27:58.561 invalidate=1 00:27:58.561 rw=randrw 00:27:58.561 time_based=1 00:27:58.561 runtime=6 00:27:58.561 ioengine=libaio 00:27:58.561 direct=1 00:27:58.561 bs=4096 00:27:58.561 iodepth=128 00:27:58.561 norandommap=0 00:27:58.561 numjobs=1 00:27:58.561 00:27:58.561 verify_dump=1 00:27:58.561 verify_backlog=512 00:27:58.561 verify_state_save=0 00:27:58.561 do_verify=1 00:27:58.561 verify=crc32c-intel 00:27:58.561 [job0] 00:27:58.561 filename=/dev/nvme0n1 00:27:58.561 Could not set queue depth (nvme0n1) 00:27:58.561 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:58.561 fio-3.35 00:27:58.561 Starting 1 thread 00:27:59.508 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:59.769 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:00.027 05:39:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:00.963 05:39:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:00.963 05:39:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:00.963 05:39:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:00.963 05:39:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:01.222 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:28:01.815 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:28:01.815 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:28:01.815 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:01.815 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:01.816 05:39:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:02.751 05:39:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:02.751 05:39:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:02.751 05:39:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:02.752 05:39:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68660 00:28:04.655 00:28:04.655 job0: (groupid=0, jobs=1): err= 0: pid=68681: Wed Nov 20 05:39:24 2024 00:28:04.655 read: IOPS=13.0k, BW=50.8MiB/s (53.3MB/s)(305MiB/6002msec) 00:28:04.655 slat (usec): min=6, max=4338, avg=43.63, stdev=193.49 00:28:04.655 clat (usec): min=611, max=52915, avg=6770.22, stdev=1697.01 00:28:04.655 lat (usec): min=629, max=52924, avg=6813.85, stdev=1701.87 00:28:04.655 clat percentiles (usec): 00:28:04.655 | 1.00th=[ 4047], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 5997], 00:28:04.655 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6849], 00:28:04.655 | 70.00th=[ 7111], 80.00th=[ 7439], 90.00th=[ 7963], 95.00th=[ 8717], 00:28:04.655 | 99.00th=[10290], 99.50th=[10814], 99.90th=[12649], 99.95th=[48497], 00:28:04.655 | 99.99th=[52691] 00:28:04.655 bw ( KiB/s): min=10488, max=33424, per=53.01%, avg=27574.18, stdev=7848.25, samples=11 00:28:04.655 iops : min= 2622, max= 8356, avg=6893.55, stdev=1962.06, samples=11 00:28:04.655 write: IOPS=7662, BW=29.9MiB/s (31.4MB/s)(155MiB/5177msec); 0 zone resets 00:28:04.655 slat (usec): min=15, max=2951, avg=54.65, stdev=129.16 00:28:04.655 clat (usec): min=300, max=51078, avg=5849.68, stdev=1822.22 00:28:04.655 lat (usec): min=341, max=51106, avg=5904.32, stdev=1825.08 00:28:04.655 clat percentiles (usec): 00:28:04.655 | 1.00th=[ 3261], 5.00th=[ 4178], 10.00th=[ 4817], 20.00th=[ 5211], 00:28:04.655 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 5932], 00:28:04.655 | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6718], 95.00th=[ 7177], 00:28:04.655 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[46924], 99.95th=[49546], 00:28:04.655 | 99.99th=[50594] 00:28:04.655 bw ( KiB/s): min=11248, max=34352, per=89.88%, avg=27547.36, stdev=7598.34, samples=11 00:28:04.655 iops : min= 2812, max= 8588, avg=6886.82, stdev=1899.58, samples=11 00:28:04.655 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:28:04.655 lat (msec) : 2=0.04%, 4=1.88%, 10=97.00%, 20=0.95%, 50=0.07% 00:28:04.655 lat (msec) : 100=0.04% 00:28:04.655 cpu : usr=5.38%, sys=24.44%, ctx=7871, majf=0, minf=108 00:28:04.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:28:04.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:04.655 issued rwts: total=78047,39668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.655 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:04.655 00:28:04.655 Run status group 0 (all jobs): 00:28:04.655 READ: bw=50.8MiB/s (53.3MB/s), 50.8MiB/s-50.8MiB/s (53.3MB/s-53.3MB/s), io=305MiB (320MB), run=6002-6002msec 00:28:04.655 WRITE: bw=29.9MiB/s (31.4MB/s), 29.9MiB/s-29.9MiB/s (31.4MB/s-31.4MB/s), io=155MiB (162MB), run=5177-5177msec 00:28:04.655 00:28:04.655 Disk stats (read/write): 00:28:04.655 nvme0n1: ios=77198/38747, merge=0/0, ticks=485347/208383, in_queue=693730, util=98.51% 00:28:04.656 05:39:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:28:05.224 05:39:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:28:05.224 05:39:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:06.161 05:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:06.161 05:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:06.161 05:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:28:06.161 05:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:28:06.161 05:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68810 00:28:06.161 05:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:28:06.161 05:39:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:28:06.420 [global] 00:28:06.420 thread=1 00:28:06.420 invalidate=1 00:28:06.420 rw=randrw 00:28:06.420 time_based=1 00:28:06.420 runtime=6 00:28:06.420 ioengine=libaio 00:28:06.420 direct=1 00:28:06.420 bs=4096 00:28:06.420 iodepth=128 00:28:06.420 norandommap=0 00:28:06.420 numjobs=1 00:28:06.420 00:28:06.420 verify_dump=1 00:28:06.420 verify_backlog=512 00:28:06.420 verify_state_save=0 00:28:06.420 do_verify=1 00:28:06.420 verify=crc32c-intel 00:28:06.420 [job0] 00:28:06.420 filename=/dev/nvme0n1 00:28:06.420 Could not set queue depth (nvme0n1) 00:28:06.420 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:06.420 fio-3.35 00:28:06.420 Starting 1 thread 00:28:07.356 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:28:07.615 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:07.898 05:39:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:08.833 05:39:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:08.833 05:39:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:08.833 05:39:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:08.833 05:39:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:09.092 05:39:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:09.350 05:39:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:28:10.726 05:39:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:28:10.726 05:39:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:28:10.726 05:39:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:28:10.726 05:39:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68810 00:28:12.628 00:28:12.628 job0: (groupid=0, jobs=1): err= 0: pid=68831: Wed Nov 20 05:39:32 2024 00:28:12.628 read: IOPS=14.5k, BW=56.7MiB/s (59.5MB/s)(341MiB/6005msec) 00:28:12.628 slat (usec): min=4, max=4335, avg=35.76, stdev=166.84 00:28:12.628 clat (usec): min=190, max=13214, avg=6145.34, stdev=1228.20 00:28:12.628 lat (usec): min=214, max=13224, avg=6181.11, stdev=1241.01 00:28:12.628 clat percentiles (usec): 00:28:12.628 | 1.00th=[ 2638], 5.00th=[ 3949], 10.00th=[ 4555], 20.00th=[ 5473], 00:28:12.628 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6456], 00:28:12.628 | 70.00th=[ 6652], 80.00th=[ 6980], 90.00th=[ 7439], 95.00th=[ 7963], 00:28:12.628 | 99.00th=[ 9503], 99.50th=[10028], 99.90th=[11076], 99.95th=[11600], 00:28:12.628 | 99.99th=[12649] 00:28:12.628 bw ( KiB/s): min= 3664, max=47272, per=50.73%, avg=29476.36, stdev=13493.05, samples=11 00:28:12.628 iops : min= 916, max=11818, avg=7369.09, stdev=3373.26, samples=11 00:28:12.628 write: IOPS=9127, BW=35.7MiB/s (37.4MB/s)(173MiB/4858msec); 0 zone resets 00:28:12.628 slat (usec): min=11, max=3100, avg=44.76, stdev=106.19 00:28:12.628 clat (usec): min=317, max=11002, avg=5094.69, stdev=1196.20 00:28:12.628 lat (usec): min=351, max=11023, avg=5139.45, stdev=1205.52 00:28:12.628 clat percentiles (usec): 00:28:12.628 | 1.00th=[ 2311], 5.00th=[ 2933], 10.00th=[ 3326], 20.00th=[ 3982], 00:28:12.628 | 30.00th=[ 4621], 40.00th=[ 5080], 50.00th=[ 5342], 60.00th=[ 5538], 00:28:12.628 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6325], 95.00th=[ 6587], 00:28:12.628 | 99.00th=[ 8094], 99.50th=[ 8848], 99.90th=[10159], 99.95th=[10421], 00:28:12.628 | 99.99th=[10683] 00:28:12.628 bw ( KiB/s): min= 3816, max=46968, per=80.81%, avg=29504.00, stdev=13277.49, samples=11 00:28:12.628 iops : min= 954, max=11742, avg=7376.00, stdev=3319.37, samples=11 00:28:12.628 lat (usec) : 250=0.01%, 500=0.02%, 750=0.06%, 1000=0.04% 00:28:12.628 lat (msec) : 2=0.30%, 4=10.08%, 10=89.12%, 20=0.38% 00:28:12.628 cpu : usr=5.63%, sys=25.90%, ctx=9317, majf=0, minf=163 00:28:12.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:28:12.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:12.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:12.629 issued rwts: total=87220,44341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:12.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:12.629 00:28:12.629 Run status group 0 (all jobs): 00:28:12.629 READ: bw=56.7MiB/s (59.5MB/s), 56.7MiB/s-56.7MiB/s (59.5MB/s-59.5MB/s), io=341MiB (357MB), run=6005-6005msec 00:28:12.629 WRITE: bw=35.7MiB/s (37.4MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.4MB/s), io=173MiB (182MB), run=4858-4858msec 00:28:12.629 00:28:12.629 Disk stats (read/write): 00:28:12.629 nvme0n1: ios=86200/43534, merge=0/0, ticks=492596/202780, in_queue=695376, util=98.66% 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:12.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:28:12.629 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.887 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.147 rmmod nvme_tcp 00:28:13.147 rmmod nvme_fabrics 00:28:13.147 rmmod nvme_keyring 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68517 ']' 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68517 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 68517 ']' 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 68517 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68517 00:28:13.147 killing process with pid 68517 00:28:13.147 05:39:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68517' 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 68517 00:28:13.147 05:39:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 68517 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:13.406 05:39:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.406 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:28:13.664 00:28:13.664 real 0m21.050s 00:28:13.664 user 1m20.469s 00:28:13.664 sys 0m7.895s 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:13.664 ************************************ 00:28:13.664 END TEST nvmf_target_multipath 00:28:13.664 ************************************ 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:28:13.664 ************************************ 00:28:13.664 START TEST nvmf_zcopy 00:28:13.664 ************************************ 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:28:13.664 * Looking for test storage... 
00:28:13.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:13.664 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.921 --rc genhtml_branch_coverage=1 00:28:13.921 --rc genhtml_function_coverage=1 00:28:13.921 --rc genhtml_legend=1 00:28:13.921 --rc geninfo_all_blocks=1 00:28:13.921 --rc geninfo_unexecuted_blocks=1 00:28:13.921 00:28:13.921 ' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.921 --rc genhtml_branch_coverage=1 00:28:13.921 --rc genhtml_function_coverage=1 00:28:13.921 --rc genhtml_legend=1 00:28:13.921 --rc geninfo_all_blocks=1 00:28:13.921 --rc geninfo_unexecuted_blocks=1 00:28:13.921 00:28:13.921 ' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.921 --rc genhtml_branch_coverage=1 00:28:13.921 --rc genhtml_function_coverage=1 00:28:13.921 --rc genhtml_legend=1 00:28:13.921 --rc geninfo_all_blocks=1 00:28:13.921 --rc geninfo_unexecuted_blocks=1 00:28:13.921 00:28:13.921 ' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:13.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.921 --rc genhtml_branch_coverage=1 00:28:13.921 --rc genhtml_function_coverage=1 00:28:13.921 --rc genhtml_legend=1 00:28:13.921 --rc geninfo_all_blocks=1 00:28:13.921 --rc geninfo_unexecuted_blocks=1 00:28:13.921 00:28:13.921 ' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
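The lcov gate traced above boils down to scripts/common.sh's cmp_versions: "lt 1.15 2" becomes cmp_versions 1.15 '<' 2, both version strings are split on ".-:", and components are compared as integers. A minimal sketch reconstructed from the trace (the helper body is an assumption; only the names and the split/compare steps appear in the log):

    # Sketch of the comparison traced above: split each version on ".-:",
    # then walk components numerically until one side wins.
    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]   # all components equal
    }
    cmp_versions 1.15 '<' 2   # true in this run, selecting the lcov 1.x option set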
00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.921 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
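nvmftestinit now calls prepare_net_devs/nvmf_veth_init; the "Cannot find device"/"Cannot open network namespace" lines that follow are the expected no-op cleanup of a fresh host before the topology exists. Condensed, the topology those commands build looks like this (names and addresses exactly as in this run; the real helper also tags each iptables rule with an SPDK_NVMF comment):

    # Two initiator veths on the host (10.0.0.1/.2), two target veths inside
    # the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4), all joined over bridge nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up && ip link set "$br" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # one rule per initiator if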
00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:13.921 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:13.922 Cannot find device "nvmf_init_br" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:28:13.922 05:39:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:13.922 Cannot find device "nvmf_init_br2" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:13.922 Cannot find device "nvmf_tgt_br" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:13.922 Cannot find device "nvmf_tgt_br2" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:13.922 Cannot find device "nvmf_init_br" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:13.922 Cannot find device "nvmf_init_br2" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:13.922 Cannot find device "nvmf_tgt_br" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:13.922 Cannot find device "nvmf_tgt_br2" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:13.922 Cannot find device "nvmf_br" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:13.922 Cannot find device "nvmf_init_if" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:13.922 Cannot find device "nvmf_init_if2" 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:13.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:13.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:28:13.922 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:14.180 05:39:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:14.180 05:39:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:14.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:14.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:28:14.180 00:28:14.180 --- 10.0.0.3 ping statistics --- 00:28:14.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.180 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:14.180 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:14.180 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:28:14.180 00:28:14.180 --- 10.0.0.4 ping statistics --- 00:28:14.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.180 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:14.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:28:14.180 00:28:14.180 --- 10.0.0.1 ping statistics --- 00:28:14.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.180 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:14.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:14.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:28:14.180 00:28:14.180 --- 10.0.0.2 ping statistics --- 00:28:14.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.180 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.180 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=69170 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 69170 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 69170 ']' 00:28:14.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:14.438 05:39:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:14.438 [2024-11-20 05:39:34.179784] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
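nvmfappstart has just launched nvmf_tgt inside the namespace (nvmfpid=69170) and waitforlisten blocks until the RPC socket answers. A minimal sketch of that wait, assuming rpc.py's rpc_get_methods as the liveness probe (the real helper in autotest_common.sh is more elaborate):

    # Poll /var/tmp/spdk.sock until the freshly started target responds,
    # bailing out early if the process dies. Probe choice is an assumption.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target exited
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up
    }
    waitforlisten 69170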
00:28:14.438 [2024-11-20 05:39:34.180131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.438 [2024-11-20 05:39:34.331070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.696 [2024-11-20 05:39:34.378955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.696 [2024-11-20 05:39:34.379208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.696 [2024-11-20 05:39:34.379224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.696 [2024-11-20 05:39:34.379232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.696 [2024-11-20 05:39:34.379240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.696 [2024-11-20 05:39:34.379551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:15.262 [2024-11-20 05:39:35.105938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:15.262 [2024-11-20 05:39:35.122055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:15.262 malloc0 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.262 { 00:28:15.262 "params": { 00:28:15.262 "name": "Nvme$subsystem", 00:28:15.262 "trtype": "$TEST_TRANSPORT", 00:28:15.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.262 "adrfam": "ipv4", 00:28:15.262 "trsvcid": "$NVMF_PORT", 00:28:15.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.262 "hdgst": ${hdgst:-false}, 00:28:15.262 "ddgst": ${ddgst:-false} 00:28:15.262 }, 00:28:15.262 "method": "bdev_nvme_attach_controller" 00:28:15.262 } 00:28:15.262 EOF 00:28:15.262 )") 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
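Collecting the rpc_cmd lines above, the whole zcopy target bring-up is five RPCs plus the bdevperf run that the generated JSON (printed next) drives:

    # Condensed from the trace: TCP transport with zero-copy enabled, one
    # subsystem, one listener on 10.0.0.3:4420, and a 32 MiB / 4 KiB-block
    # malloc bdev attached as namespace 1.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192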
00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:15.262 05:39:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:15.262 "params": { 00:28:15.262 "name": "Nvme1", 00:28:15.262 "trtype": "tcp", 00:28:15.262 "traddr": "10.0.0.3", 00:28:15.262 "adrfam": "ipv4", 00:28:15.262 "trsvcid": "4420", 00:28:15.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:15.262 "hdgst": false, 00:28:15.262 "ddgst": false 00:28:15.262 }, 00:28:15.262 "method": "bdev_nvme_attach_controller" 00:28:15.262 }' 00:28:15.520 [2024-11-20 05:39:35.216322] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:15.520 [2024-11-20 05:39:35.216417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69221 ] 00:28:15.520 [2024-11-20 05:39:35.376021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.520 [2024-11-20 05:39:35.437305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.779 Running I/O for 10 seconds... 00:28:18.124 7333.00 IOPS, 57.29 MiB/s [2024-11-20T05:39:38.977Z] 7573.50 IOPS, 59.17 MiB/s [2024-11-20T05:39:39.910Z] 7681.33 IOPS, 60.01 MiB/s [2024-11-20T05:39:40.843Z] 7755.50 IOPS, 60.59 MiB/s [2024-11-20T05:39:41.780Z] 7794.40 IOPS, 60.89 MiB/s [2024-11-20T05:39:42.717Z] 7835.67 IOPS, 61.22 MiB/s [2024-11-20T05:39:43.656Z] 7847.86 IOPS, 61.31 MiB/s [2024-11-20T05:39:44.686Z] 7867.12 IOPS, 61.46 MiB/s [2024-11-20T05:39:45.623Z] 7870.00 IOPS, 61.48 MiB/s [2024-11-20T05:39:45.623Z] 7882.50 IOPS, 61.58 MiB/s 00:28:25.704 Latency(us) 00:28:25.704 [2024-11-20T05:39:45.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:28:25.704 Verification LBA range: start 0x0 length 0x1000 00:28:25.704 Nvme1n1 : 10.01 7887.51 61.62 0.00 0.00 16182.91 546.13 24092.28 00:28:25.704 [2024-11-20T05:39:45.623Z] =================================================================================================================== 00:28:25.704 [2024-11-20T05:39:45.623Z] Total : 7887.51 61.62 0.00 0.00 16182.91 546.13 24092.28 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=69343 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:25.964 { 00:28:25.964 "params": { 00:28:25.964 "name": "Nvme$subsystem", 
00:28:25.964 "trtype": "$TEST_TRANSPORT", 00:28:25.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:25.964 "adrfam": "ipv4", 00:28:25.964 "trsvcid": "$NVMF_PORT", 00:28:25.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:25.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:25.964 "hdgst": ${hdgst:-false}, 00:28:25.964 "ddgst": ${ddgst:-false} 00:28:25.964 }, 00:28:25.964 "method": "bdev_nvme_attach_controller" 00:28:25.964 } 00:28:25.964 EOF 00:28:25.964 )") 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:25.964 [2024-11-20 05:39:45.803060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:25.964 [2024-11-20 05:39:45.803230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:25.964 05:39:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:25.964 "params": { 00:28:25.964 "name": "Nvme1", 00:28:25.964 "trtype": "tcp", 00:28:25.964 "traddr": "10.0.0.3", 00:28:25.964 "adrfam": "ipv4", 00:28:25.964 "trsvcid": "4420", 00:28:25.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:25.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:25.964 "hdgst": false, 00:28:25.964 "ddgst": false 00:28:25.964 }, 00:28:25.964 "method": "bdev_nvme_attach_controller" 00:28:25.964 }' 00:28:25.964 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:25.964 [2024-11-20 05:39:45.815042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:25.964 [2024-11-20 05:39:45.815069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:25.964 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:25.964 [2024-11-20 05:39:45.827027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:25.964 [2024-11-20 05:39:45.827049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:25.964 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:25.964 [2024-11-20 05:39:45.839030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:25.964 [2024-11-20 05:39:45.839053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:25.964 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:25.964 [2024-11-20 05:39:45.851024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:28:25.964 [2024-11-20 05:39:45.851047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:25.964 [2024-11-20 05:39:45.852310] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:25.964 [2024-11-20 05:39:45.852413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69343 ] 00:28:25.964 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:25.964 [2024-11-20 05:39:45.863035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:25.964 [2024-11-20 05:39:45.863167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:25.964 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:25.964 [2024-11-20 05:39:45.875037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:25.964 [2024-11-20 05:39:45.875161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:25.964 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:26.224 [2024-11-20 05:39:45.887037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:26.224 [2024-11-20 05:39:45.887144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:26.224 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:26.224 [2024-11-20 05:39:45.899038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:26.224 [2024-11-20 05:39:45.899060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:26.224 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:26.224 [2024-11-20 05:39:45.911033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:26.224 [2024-11-20 05:39:45.911053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:26.224 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:28:26.224 [2024-11-20 05:39:45.923037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:26.224 [2024-11-20 05:39:45.923056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:26.224 2024/11/20 05:39:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-part record (subsystem.c:2123 "Requested NSID 1 already in use" -> nvmf_rpc.c:1517 "Unable to add namespace" -> JSON-RPC error Code=-32602 Msg=Invalid parameters, identical params) repeats at ~12 ms intervals from 05:39:45.935 through 05:39:45.995 ...]
00:28:26.224 [2024-11-20 05:39:45.999129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... error record repeats at ~12 ms intervals from 05:39:46.007 through 05:39:46.043 ...]
00:28:26.224 [2024-11-20 05:39:46.049530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... error record repeats at ~12 ms intervals from 05:39:46.055 through 05:39:46.211 ...]
Running I/O for 5 seconds...
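Aside for anyone reproducing this failure mode: -32602 is the standard JSON-RPC 2.0 "Invalid params" code, and the "no_auto_visible:%!s(bool=false)" fragment in the params dump is just Go's fmt artifact for a bool printed with %s (the value is false). Below is a minimal sketch of the call the test client appears to be issuing, reconstructed from the params map logged above; the socket path is an assumption (SPDK's default RPC Unix socket), and the single recv() is a simplification adequate for a small response.

    import json
    import socket

    # Assumption: SPDK target listening on its default RPC socket.
    SOCK_PATH = "/var/tmp/spdk.sock"

    # Request shape mirrors the params map printed in the errors above.
    # Issuing it a second time against the same subsystem yields the
    # logged failure: NSID 1 is already attached to cnode1.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        response = json.loads(sock.recv(65536).decode())

    # Duplicate call answers with the error seen throughout this log:
    # {"jsonrpc": "2.0", "id": 1,
    #  "error": {"code": -32602, "message": "Invalid parameters"}}
    print(response)
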
00:28:26.485 [2024-11-20 05:39:46.223119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:26.485 [2024-11-20 05:39:46.223139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:26.485 2024/11/20 05:39:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same error record repeats at irregular 11-21 ms intervals from 05:39:46.238 through 05:39:47.217 ...]
00:28:27.527 15048.00 IOPS, 117.56 MiB/s [2024-11-20T05:39:47.446Z]
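A quick back-of-the-envelope check on the interim stats line above (an assumption-laden sketch: MiB here taken as 2**20 bytes, the fio/SPDK convention) shows the throughput and IOPS are mutually consistent for an 8 KiB I/O size:

    # Throughput divided by IOPS gives the implied average I/O size.
    iops = 15048.00
    throughput_mib_s = 117.56
    avg_io_bytes = throughput_mib_s * 2**20 / iops
    print(round(avg_io_bytes))  # ~8192 bytes, i.e. 8 KiB per I/O
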
[... the same error record repeats at ~9-16 ms intervals from 05:39:47.233 through 05:39:47.739 ...]
00:28:28.048 [2024-11-20 05:39:47.754674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.754703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.769332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.769360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.781112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.781140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.797170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.797201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.812975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.813009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.829594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.829624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.845537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.845568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.856994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.857023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.872405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.872435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.889755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.889785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.906135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.906162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.921348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.921374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.935861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.935886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.947000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.947029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.048 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.048 [2024-11-20 05:39:47.962369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.048 [2024-11-20 05:39:47.962396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:47.978398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:47.978426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:47.989656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:47.989683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.005134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.005166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.020005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.020035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.033912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.034055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.050194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:28:28.308 [2024-11-20 05:39:48.050228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.064949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.064980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.079282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.079312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.093917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.094051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.104850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.104880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.119965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.119994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:28.308 [2024-11-20 05:39:48.133523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.308 [2024-11-20 05:39:48.133553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.308 2024/11/20 05:39:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
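The failures above are SPDK's duplicate-NSID rejection path: spdk_nvmf_subsystem_add_ns_ext() refuses each request because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, nvmf_rpc_ns_paused() then reports "Unable to add namespace", and the Go JSON-RPC client surfaces it as code -32602 (invalid parameters). A minimal sketch of provoking the same rejection by hand, assuming a running nvmf target and the scripts/rpc.py helper from the SPDK tree (the subsystem and bdev names are taken from the log; the exact rpc.py flag spelling may differ between SPDK versions):

    #!/usr/bin/env bash
    # Sketch: reproduce "Requested NSID 1 already in use" on a running target.
    set -euo pipefail
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # The first add claims NSID 1 for malloc0.
    "$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" malloc0

    # A second add with the same explicit NSID is rejected by
    # spdk_nvmf_subsystem_add_ns_ext() and returned as JSON-RPC -32602.
    "$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" malloc0 \
        || echo "second add rejected: NSID 1 already in use"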
00:28:28.308 15151.50 IOPS, 118.37 MiB/s [2024-11-20T05:39:48.227Z]
[... the same add-namespace failure repeats from 05:39:48.221 through 05:39:49.216; only the timestamps differ ...]
00:28:29.350 15143.67 IOPS, 118.31 MiB/s [2024-11-20T05:39:49.269Z]
[... failures continue from 05:39:49.231 through 05:39:49.379 ...]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.609 [2024-11-20 05:39:49.305218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.609 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.609 [2024-11-20 05:39:49.317186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.609 [2024-11-20 05:39:49.317215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.609 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.609 [2024-11-20 05:39:49.332846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.609 [2024-11-20 05:39:49.332872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.609 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.609 [2024-11-20 05:39:49.348975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.609 [2024-11-20 05:39:49.349003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.609 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.609 [2024-11-20 05:39:49.364584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.609 [2024-11-20 05:39:49.364613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.609 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.609 [2024-11-20 05:39:49.379739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.609 [2024-11-20 05:39:49.379766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.609 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.609 [2024-11-20 05:39:49.395107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.609 [2024-11-20 05:39:49.395136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.609 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.609 [2024-11-20 05:39:49.410357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.410386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.610 [2024-11-20 05:39:49.425682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.425709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.610 [2024-11-20 05:39:49.440390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.440418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.610 [2024-11-20 05:39:49.451506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.451530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.610 [2024-11-20 05:39:49.467161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.467188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.610 [2024-11-20 05:39:49.483350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.483376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.610 [2024-11-20 05:39:49.500019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.500046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.610 [2024-11-20 05:39:49.511581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.610 [2024-11-20 05:39:49.511607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.610 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.527300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.527326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.543363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.543391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.555371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.555399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.570793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.570821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.586416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.586453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.601198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.601226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.617060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.617090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.632292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.632321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.648501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.648528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.659842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.659872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.675656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.675683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.690936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:28:29.869 [2024-11-20 05:39:49.690964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.869 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.869 [2024-11-20 05:39:49.706169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.869 [2024-11-20 05:39:49.706198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.870 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.870 [2024-11-20 05:39:49.721681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.870 [2024-11-20 05:39:49.721709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.870 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.870 [2024-11-20 05:39:49.736081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.870 [2024-11-20 05:39:49.736109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.870 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.870 [2024-11-20 05:39:49.747874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.870 [2024-11-20 05:39:49.747902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.870 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.870 [2024-11-20 05:39:49.763353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.870 [2024-11-20 05:39:49.763379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.870 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:29.870 [2024-11-20 05:39:49.779710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.870 [2024-11-20 05:39:49.779736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.870 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.790346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.790370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.805903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.805929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.821477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.821504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.836010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.836037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.847169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.847195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.862630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.862658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.878343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.878371] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.893938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.893967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.909706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.909733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.925033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.925063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.940377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.940405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.954236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.954264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.969700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.969728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.980528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.980554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:49.996191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:49.996219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:50.011640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:50.011667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:50.026295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:50.026324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.129 [2024-11-20 05:39:50.037422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.129 [2024-11-20 05:39:50.037466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.129 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.052792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.052817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.072813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.072839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.087850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.088004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.104236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.104263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.115949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.116161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.131644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.131833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.142685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.142877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.158185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.158407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:28:30.388 [2024-11-20 05:39:50.174038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.174277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.188643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.188848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.200062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.200217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.216177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.216306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 15180.50 IOPS, 118.60 MiB/s [2024-11-20T05:39:50.307Z] [2024-11-20 05:39:50.231973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.232083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.246540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.246646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.262035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.262214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.277876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.278072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.293147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.293324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.388 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.388 [2024-11-20 05:39:50.304321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.388 [2024-11-20 05:39:50.304509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.319612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.319704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.331340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.331523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.346299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.346397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:28:30.647 [2024-11-20 05:39:50.361763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.361880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.378030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.378215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.393024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.393153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.409073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.409108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.423478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.423509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.434911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.435046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.450581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.450613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.466555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.466586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.481644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.481674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.498044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.498075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.513601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.513631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.527589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.527618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.542574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.542604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.647 [2024-11-20 05:39:50.557540] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.647 [2024-11-20 05:39:50.557571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.647 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.911 [2024-11-20 05:39:50.572869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.911 [2024-11-20 05:39:50.573008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.911 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.911 [2024-11-20 05:39:50.586969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.911 [2024-11-20 05:39:50.587003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.911 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.911 [2024-11-20 05:39:50.602522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.911 [2024-11-20 05:39:50.602554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.911 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.911 [2024-11-20 05:39:50.618606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.911 [2024-11-20 05:39:50.618638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.911 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.911 [2024-11-20 05:39:50.630269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.911 [2024-11-20 05:39:50.630301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.911 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:30.911 [2024-11-20 05:39:50.645669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:30.911 [2024-11-20 05:39:50.645822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:30.911 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:30.911 [2024-11-20 05:39:50.661491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:30.911 [2024-11-20 05:39:50.661522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:28:30.911 2024/11/20 05:39:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:28:31.180 [... the same three-line error repeats, timestamps aside, at 11-19 ms intervals from 05:39:50.661 through 05:39:51.225 while the test keeps retrying nvmf_subsystem_add_ns against the already-claimed NSID 1 ...]
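Each repetition above is a single rejected retry: the target answers JSON-RPC error -32602 because subsystem cnode1 already owns NSID 1, so nvmf_rpc_ns_paused cannot attach malloc0 a second time. Issued by hand, the failing call would look like this (a sketch; rpc_cmd in this suite forwards to scripts/rpc.py on the default /var/tmp/spdk.sock):

    # same parameters as the failing JSON-RPC request in the log
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # => Code=-32602 Msg=Invalid parameters: spdk_nvmf_subsystem_add_ns_ext
    #    refuses the request while NSID 1 is still in use on cnode1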
00:28:31.439 15172.60 IOPS, 118.54 MiB/s [2024-11-20T05:39:51.358Z]
00:28:31.439 Latency(us)
00:28:31.439 Device Information          : runtime(s)      IOPS     MiB/s    Fail/s     TO/s    Average        min        max
00:28:31.439 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:28:31.439 Nvme1n1                     :       5.01  15170.70    118.52      0.00     0.00    8428.04    3354.82   17601.10
00:28:31.439 ===================================================================================================================
00:28:31.439 Total                       :             15170.70    118.52      0.00     0.00    8428.04    3354.82   17601.10
00:28:31.698 [... the add_ns error keeps repeating every ~12 ms from 05:39:51.234 through 05:39:51.390 while the retry loop is being torn down ...]
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (69343) - No such process
00:28:31.698 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 69343
00:28:31.698 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:31.698 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.698 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:31.698 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.698 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:31.699 delay0
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.699 05:39:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
[2024-11-20 05:39:51.579995] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:28:38.269 Initializing NVMe Controllers
00:28:38.269 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:28:38.269 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:38.269 Initialization complete. Launching workers.
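The recovery sequence just traced is the setup for the abort exercise: zcopy.sh drops the plain malloc0 namespace, wraps the bdev in a delay bdev with 1,000,000 us (about one second) of injected average and tail latency on both reads and writes, re-exports it as NSID 1, and only then launches the abort example, so there is always slow I/O in flight to cancel. Replayed by hand, the sequence would be (a sketch assuming the default rpc.py socket; flags copied from the trace, the latency-flag reading is an assumption about bdev_delay_create's -r/-t/-w/-n meanings):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # avg/p99 read+write latency, microseconds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
    # -t 5: run 5 s; -q 64: queue depth 64; -w randrw -M 50: 50/50 random read/write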
00:28:38.269 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 173 00:28:38.269 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 460, failed to submit 33 00:28:38.269 success 272, unsuccessful 188, failed 0 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.269 rmmod nvme_tcp 00:28:38.269 rmmod nvme_fabrics 00:28:38.269 rmmod nvme_keyring 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 69170 ']' 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 69170 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 69170 ']' 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 69170 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69170 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:38.269 killing process with pid 69170 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69170' 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 69170 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 69170 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:28:38.269 05:39:57 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:38.269 05:39:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:38.269 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:38.270 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:28:38.529 00:28:38.529 real 0m24.843s 00:28:38.529 user 0m39.485s 00:28:38.529 sys 0m7.639s 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:38.529 ************************************ 00:28:38.529 END TEST nvmf_zcopy 00:28:38.529 ************************************ 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:28:38.529 ************************************ 00:28:38.529 START TEST nvmf_nmic 00:28:38.529 ************************************ 00:28:38.529 05:39:58 
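The iptr trace in the zcopy teardown above (iptables-save, grep -v SPDK_NVMF, iptables-restore, all at nvmf/common.sh@791) is one pipeline: every firewall rule the suite installs carries an SPDK_NVMF: comment, so round-tripping the ruleset through that filter removes exactly the SPDK-added rules and nothing else. Reconstructed from the three traced stages (an assumption about the helper's exact form):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except the SPDK-tagged ones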
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:28:38.529 * Looking for test storage... 00:28:38.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:38.529 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.789 --rc genhtml_branch_coverage=1 00:28:38.789 --rc genhtml_function_coverage=1 00:28:38.789 --rc genhtml_legend=1 00:28:38.789 --rc geninfo_all_blocks=1 00:28:38.789 --rc geninfo_unexecuted_blocks=1 00:28:38.789 00:28:38.789 ' 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:38.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.789 --rc genhtml_branch_coverage=1 00:28:38.789 --rc genhtml_function_coverage=1 00:28:38.789 --rc genhtml_legend=1 00:28:38.789 --rc geninfo_all_blocks=1 00:28:38.789 --rc geninfo_unexecuted_blocks=1 00:28:38.789 00:28:38.789 ' 00:28:38.789 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:38.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.790 --rc genhtml_branch_coverage=1 00:28:38.790 --rc genhtml_function_coverage=1 00:28:38.790 --rc genhtml_legend=1 00:28:38.790 --rc geninfo_all_blocks=1 00:28:38.790 --rc geninfo_unexecuted_blocks=1 00:28:38.790 00:28:38.790 ' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:38.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:38.790 --rc genhtml_branch_coverage=1 00:28:38.790 --rc genhtml_function_coverage=1 00:28:38.790 --rc genhtml_legend=1 00:28:38.790 --rc geninfo_all_blocks=1 00:28:38.790 --rc geninfo_unexecuted_blocks=1 00:28:38.790 00:28:38.790 ' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.790 05:39:58 
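That lt 1.15 2 dance is the suite checking whether the installed lcov (1.15 here) predates version 2, which decides which flavor of --rc coverage options gets exported just above: cmp_versions splits both version strings on dots and compares them field by field, treating missing fields as zero. A condensed sketch of the helpers as traced (not the verbatim scripts/common.sh source):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:            # split "1.15" -> (1 15), "2" -> (2)
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $2 == '>' ]] && return 0
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $2 == '<' ]] && return 0
            ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1   # ordered against the operator
        done
        return 1                 # equal versions, so a strict < or > fails
    }
    lt 1.15 2   # first fields: 1 < 2, so it returns 0 and the pre-2.0 lcov flags are chosen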
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:38.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:28:38.790 05:39:58 
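One wart surfaces in this preamble: the "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" complaint comes from the traced '[' '' -eq 1 ']' in build_nvmf_app_args, a numeric test against a variable that expands empty in this environment. test(1) exits with an error, the branch is simply not taken, and the run continues, but a guarded spelling would keep the log clean (variable name hypothetical; the trace only shows its empty expansion):

    [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]   # default the unset flag to 0 so -eq always sees a number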
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:38.790 Cannot 
find device "nvmf_init_br" 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:38.790 Cannot find device "nvmf_init_br2" 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:38.790 Cannot find device "nvmf_tgt_br" 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:38.790 Cannot find device "nvmf_tgt_br2" 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:38.790 Cannot find device "nvmf_init_br" 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:38.790 Cannot find device "nvmf_init_br2" 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:28:38.790 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:38.791 Cannot find device "nvmf_tgt_br" 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:38.791 Cannot find device "nvmf_tgt_br2" 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:38.791 Cannot find device "nvmf_br" 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:38.791 Cannot find device "nvmf_init_if" 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:28:38.791 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:39.049 Cannot find device "nvmf_init_if2" 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:39.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:39.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:39.049 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:39.050 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:39.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:39.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:28:39.308 00:28:39.308 --- 10.0.0.3 ping statistics --- 00:28:39.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.308 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:39.308 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:39.308 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:28:39.308 00:28:39.308 --- 10.0.0.4 ping statistics --- 00:28:39.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.308 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:28:39.308 05:39:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:39.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:28:39.308 00:28:39.308 --- 10.0.0.1 ping statistics --- 00:28:39.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.308 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:39.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:39.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:28:39.308 00:28:39.308 --- 10.0.0.2 ping statistics --- 00:28:39.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.308 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.308 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69720 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69720 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 69720 ']' 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:39.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:39.309 05:39:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:39.309 [2024-11-20 05:39:59.107178] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
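nvmfappstart -m 0xF launches the target inside that namespace, which is why the command under nvmf/common.sh@508 is prefixed with ip netns exec: the TCP listener it will open on 10.0.0.3:4420 has to live on the namespaced veth. Boiled down from the trace (the backgrounding and pid handling are a sketch of what the helper does, not its verbatim source):

    # four reactors (-m 0xF covers cores 0-3), every tracepoint group on (-e 0xFFFF),
    # shared-memory instance id 0 (-i 0, hence /dev/shm/nvmf_trace.0 in the notices below)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                 # 69720 in this run
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the RPC server answers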
00:28:39.309 [2024-11-20 05:39:59.107273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.568 [2024-11-20 05:39:59.267425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:39.568 [2024-11-20 05:39:59.332044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.568 [2024-11-20 05:39:59.332106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.568 [2024-11-20 05:39:59.332125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.568 [2024-11-20 05:39:59.332142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.568 [2024-11-20 05:39:59.332157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.568 [2024-11-20 05:39:59.333273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.568 [2024-11-20 05:39:59.333373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.568 [2024-11-20 05:39:59.333408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.568 [2024-11-20 05:39:59.333413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 [2024-11-20 05:40:00.178280] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 Malloc0 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 [2024-11-20 05:40:00.237685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 test case1: single bdev can't be used in multiple subsystems 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 [2024-11-20 05:40:00.261534] bdev.c:8311:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:28:40.507 [2024-11-20 05:40:00.261567] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:28:40.507 [2024-11-20 05:40:00.261578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:40.507 2024/11/20 05:40:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:28:40.507 request: 00:28:40.507 { 00:28:40.507 "method": "nvmf_subsystem_add_ns", 00:28:40.507 "params": { 00:28:40.507 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:28:40.507 "namespace": { 00:28:40.507 "bdev_name": "Malloc0", 00:28:40.507 "no_auto_visible": false 00:28:40.507 } 00:28:40.507 } 00:28:40.507 } 00:28:40.507 Got JSON-RPC error response 00:28:40.507 GoRPCClient: error on JSON-RPC call 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:28:40.507 Adding namespace failed - expected result. 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:28:40.507 test case2: host connect to nvmf target in multiple paths 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:40.507 [2024-11-20 05:40:00.273679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.507 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:28:40.766 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:28:40.766 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:28:40.766 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:28:40.766 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:28:40.766 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:28:40.766 05:40:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:28:43.299 05:40:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:28:43.299 05:40:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:28:43.299 05:40:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:28:43.299 05:40:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:28:43.299 05:40:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:28:43.299 05:40:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:28:43.299 05:40:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:43.299 [global] 00:28:43.299 thread=1 00:28:43.299 invalidate=1 00:28:43.299 rw=write 00:28:43.299 time_based=1 00:28:43.299 runtime=1 00:28:43.299 ioengine=libaio 00:28:43.299 direct=1 00:28:43.299 bs=4096 00:28:43.299 iodepth=1 00:28:43.299 norandommap=0 00:28:43.299 numjobs=1 00:28:43.299 00:28:43.299 verify_dump=1 00:28:43.299 verify_backlog=512 00:28:43.299 verify_state_save=0 00:28:43.299 do_verify=1 00:28:43.299 verify=crc32c-intel 00:28:43.299 [job0] 00:28:43.299 filename=/dev/nvme0n1 00:28:43.299 Could not set queue depth (nvme0n1) 00:28:43.299 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:43.299 fio-3.35 00:28:43.299 Starting 1 thread 00:28:44.237 00:28:44.237 job0: (groupid=0, jobs=1): err= 0: pid=69830: Wed Nov 20 05:40:03 2024 00:28:44.237 read: IOPS=4280, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1000msec) 00:28:44.237 slat (nsec): min=10666, max=39069, avg=12774.85, stdev=2610.12 00:28:44.237 clat (usec): min=91, max=470, avg=110.06, stdev=13.17 00:28:44.237 lat (usec): min=102, max=485, avg=122.84, stdev=13.90 00:28:44.237 clat percentiles (usec): 00:28:44.237 | 1.00th=[ 96], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 102], 00:28:44.237 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:28:44.237 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 127], 00:28:44.237 | 99.00th=[ 143], 99.50th=[ 153], 99.90th=[ 255], 99.95th=[ 355], 00:28:44.237 | 99.99th=[ 469] 00:28:44.237 write: IOPS=4608, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1000msec); 0 zone resets 00:28:44.237 slat (usec): min=11, max=103, avg=19.24, stdev= 5.36 00:28:44.237 clat (usec): min=66, max=336, avg=81.49, stdev=10.75 00:28:44.237 lat (usec): min=81, max=368, avg=100.72, stdev=13.17 00:28:44.237 clat percentiles (usec): 00:28:44.237 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:28:44.237 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 83], 00:28:44.237 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 91], 95.00th=[ 96], 00:28:44.237 | 99.00th=[ 111], 99.50th=[ 119], 99.90th=[ 202], 99.95th=[ 225], 00:28:44.237 | 99.99th=[ 338] 00:28:44.237 bw ( KiB/s): min=18048, max=18048, per=97.92%, avg=18048.00, stdev= 0.00, samples=1 00:28:44.237 iops : min= 4512, max= 4512, avg=4512.00, stdev= 0.00, samples=1 00:28:44.237 lat (usec) : 100=55.15%, 250=44.78%, 500=0.07% 00:28:44.237 cpu : usr=2.90%, sys=10.70%, ctx=8888, majf=0, minf=5 00:28:44.237 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:44.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:44.237 issued rwts: total=4280,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:44.237 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:44.237 00:28:44.237 Run status group 0 (all jobs): 00:28:44.237 READ: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), io=16.7MiB (17.5MB), run=1000-1000msec 00:28:44.237 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1000-1000msec 
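For reading the fio tables above: slat is the submission (syscall-side) latency, clat the completion (device-side) latency, and lat their sum, which the job0 averages confirm to within rounding:

    lat ≈ slat + clat
    read : 12.77 us + 110.06 us ≈ 122.84 us
    write: 19.24 us +  81.49 us ≈ 100.72 us

And because iodepth=1 keeps at most one I/O in flight, the combined throughput (4280 + 4608 ≈ 8.9k IOPS over the 1000 msec run) is approximately the reciprocal of the ~111 us weighted-average per-I/O latency.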
00:28:44.237 00:28:44.237 Disk stats (read/write): 00:28:44.237 nvme0n1: ios=3900/4096, merge=0/0, ticks=446/369, in_queue=815, util=91.28% 00:28:44.237 05:40:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:44.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:44.237 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:44.237 rmmod nvme_tcp 00:28:44.237 rmmod nvme_fabrics 00:28:44.237 rmmod nvme_keyring 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69720 ']' 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69720 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 69720 ']' 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 69720 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69720 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:44.496 killing process with pid 69720 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 69720' 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 69720 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 69720 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:44.496 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:44.754 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:44.754 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:44.754 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:44.754 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:44.754 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:44.754 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:28:44.755 00:28:44.755 real 0m6.324s 00:28:44.755 user 0m19.886s 00:28:44.755 sys 0m1.806s 00:28:44.755 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:44.755 05:40:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:44.755 ************************************ 00:28:44.755 END TEST nvmf_nmic 00:28:44.755 ************************************ 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:28:45.014 ************************************ 00:28:45.014 START TEST nvmf_fio_target 00:28:45.014 ************************************ 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:28:45.014 * Looking for test storage... 00:28:45.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:45.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.014 --rc genhtml_branch_coverage=1 00:28:45.014 --rc genhtml_function_coverage=1 00:28:45.014 --rc genhtml_legend=1 00:28:45.014 --rc geninfo_all_blocks=1 00:28:45.014 --rc geninfo_unexecuted_blocks=1 00:28:45.014 00:28:45.014 ' 00:28:45.014 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:45.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.015 --rc genhtml_branch_coverage=1 00:28:45.015 --rc genhtml_function_coverage=1 00:28:45.015 --rc genhtml_legend=1 00:28:45.015 --rc geninfo_all_blocks=1 00:28:45.015 --rc geninfo_unexecuted_blocks=1 00:28:45.015 00:28:45.015 ' 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:45.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.015 --rc genhtml_branch_coverage=1 00:28:45.015 --rc genhtml_function_coverage=1 00:28:45.015 --rc genhtml_legend=1 00:28:45.015 --rc geninfo_all_blocks=1 00:28:45.015 --rc geninfo_unexecuted_blocks=1 00:28:45.015 00:28:45.015 ' 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:45.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.015 --rc genhtml_branch_coverage=1 00:28:45.015 --rc genhtml_function_coverage=1 00:28:45.015 --rc genhtml_legend=1 00:28:45.015 --rc geninfo_all_blocks=1 00:28:45.015 --rc geninfo_unexecuted_blocks=1 00:28:45.015 00:28:45.015 ' 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:28:45.015 
05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:45.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:45.015 05:40:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:45.015 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:45.274 Cannot find device "nvmf_init_br" 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:45.274 Cannot find device "nvmf_init_br2" 00:28:45.274 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:28:45.275 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:45.275 Cannot find device "nvmf_tgt_br" 00:28:45.275 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:28:45.275 05:40:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:45.275 Cannot find device "nvmf_tgt_br2" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:45.275 Cannot find device "nvmf_init_br" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:45.275 Cannot find device "nvmf_init_br2" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:45.275 Cannot find device "nvmf_tgt_br" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:45.275 Cannot find device "nvmf_tgt_br2" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:45.275 Cannot find device "nvmf_br" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:45.275 Cannot find device "nvmf_init_if" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:45.275 Cannot find device "nvmf_init_if2" 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:45.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:28:45.275 
05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:45.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:45.275 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:45.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:45.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:28:45.534 00:28:45.534 --- 10.0.0.3 ping statistics --- 00:28:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.534 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:45.534 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:45.534 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:28:45.534 00:28:45.534 --- 10.0.0.4 ping statistics --- 00:28:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.534 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:45.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:45.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:28:45.534 00:28:45.534 --- 10.0.0.1 ping statistics --- 00:28:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.534 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:45.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:45.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:28:45.534 00:28:45.534 --- 10.0.0.2 ping statistics --- 00:28:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.534 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:28:45.534 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=70070 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 70070 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 70070 ']' 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:45.535 05:40:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:45.535 [2024-11-20 05:40:05.428185] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
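nvmfappstart above launches nvmf_tgt (here pid 70070) inside the target namespace and then blocks in waitforlisten until the application's JSON-RPC socket at /var/tmp/spdk.sock answers. A hedged sketch of that start-and-wait pattern (the poll interval and retry bound are illustrative; the real helper lives in autotest_common.sh):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods is a cheap request that any live SPDK app answers
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done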
00:28:45.535 [2024-11-20 05:40:05.428284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.794 [2024-11-20 05:40:05.582019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:45.794 [2024-11-20 05:40:05.634111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.794 [2024-11-20 05:40:05.634157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.794 [2024-11-20 05:40:05.634167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.794 [2024-11-20 05:40:05.634175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.794 [2024-11-20 05:40:05.634182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.794 [2024-11-20 05:40:05.635061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.794 [2024-11-20 05:40:05.636043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:45.794 [2024-11-20 05:40:05.636145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.794 [2024-11-20 05:40:05.636145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.861 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:46.861 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:28:46.861 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:46.861 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.861 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:28:46.862 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.862 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:46.862 [2024-11-20 05:40:06.624157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.862 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:47.120 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:28:47.120 05:40:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:47.378 05:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:28:47.378 05:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:47.637 05:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:28:47.637 05:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:47.894 05:40:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:28:47.894 05:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:28:48.152 05:40:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:48.409 05:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:28:48.409 05:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:48.667 05:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:28:48.667 05:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:48.926 05:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:28:48.926 05:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:28:48.926 05:40:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:49.494 05:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:28:49.494 05:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.494 05:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:28:49.494 05:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:49.752 05:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:50.012 [2024-11-20 05:40:09.789533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:50.012 05:40:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:28:50.270 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:28:50.530 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:28:50.789 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:28:50.789 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:28:50.789 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 
nvme_devices=0 00:28:50.789 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:28:50.789 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:28:50.789 05:40:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:28:52.691 05:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:28:52.691 05:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:28:52.691 05:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:28:52.691 05:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:28:52.691 05:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:28:52.691 05:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:28:52.691 05:40:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:52.691 [global] 00:28:52.691 thread=1 00:28:52.691 invalidate=1 00:28:52.691 rw=write 00:28:52.691 time_based=1 00:28:52.691 runtime=1 00:28:52.691 ioengine=libaio 00:28:52.691 direct=1 00:28:52.691 bs=4096 00:28:52.691 iodepth=1 00:28:52.691 norandommap=0 00:28:52.691 numjobs=1 00:28:52.691 00:28:52.691 verify_dump=1 00:28:52.691 verify_backlog=512 00:28:52.691 verify_state_save=0 00:28:52.691 do_verify=1 00:28:52.691 verify=crc32c-intel 00:28:52.691 [job0] 00:28:52.691 filename=/dev/nvme0n1 00:28:52.691 [job1] 00:28:52.691 filename=/dev/nvme0n2 00:28:52.691 [job2] 00:28:52.691 filename=/dev/nvme0n3 00:28:52.691 [job3] 00:28:52.691 filename=/dev/nvme0n4 00:28:52.691 Could not set queue depth (nvme0n1) 00:28:52.691 Could not set queue depth (nvme0n2) 00:28:52.691 Could not set queue depth (nvme0n3) 00:28:52.691 Could not set queue depth (nvme0n4) 00:28:52.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:52.949 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:52.950 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:52.950 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:52.950 fio-3.35 00:28:52.950 Starting 4 threads 00:28:54.325 00:28:54.325 job0: (groupid=0, jobs=1): err= 0: pid=70357: Wed Nov 20 05:40:13 2024 00:28:54.325 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:28:54.325 slat (nsec): min=8778, max=31455, avg=11221.29, stdev=2492.09 00:28:54.325 clat (usec): min=133, max=528, avg=161.39, stdev=17.39 00:28:54.325 lat (usec): min=145, max=544, avg=172.61, stdev=17.35 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:28:54.325 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:28:54.325 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 188], 00:28:54.325 | 99.00th=[ 208], 99.50th=[ 217], 99.90th=[ 334], 99.95th=[ 515], 00:28:54.325 | 99.99th=[ 529] 00:28:54.325 write: IOPS=3321, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:28:54.325 slat 
(usec): min=12, max=137, avg=17.03, stdev= 5.59 00:28:54.325 clat (usec): min=93, max=211, avg=122.14, stdev=12.33 00:28:54.325 lat (usec): min=112, max=307, avg=139.17, stdev=13.55 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 102], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 113], 00:28:54.325 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 123], 00:28:54.325 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 139], 95.00th=[ 147], 00:28:54.325 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 202], 00:28:54.325 | 99.99th=[ 212] 00:28:54.325 bw ( KiB/s): min=13192, max=13192, per=30.57%, avg=13192.00, stdev= 0.00, samples=1 00:28:54.325 iops : min= 3298, max= 3298, avg=3298.00, stdev= 0.00, samples=1 00:28:54.325 lat (usec) : 100=0.16%, 250=99.75%, 500=0.06%, 750=0.03% 00:28:54.325 cpu : usr=1.90%, sys=7.10%, ctx=6398, majf=0, minf=9 00:28:54.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 issued rwts: total=3072,3325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:54.325 job1: (groupid=0, jobs=1): err= 0: pid=70358: Wed Nov 20 05:40:13 2024 00:28:54.325 read: IOPS=1902, BW=7608KiB/s (7791kB/s)(7616KiB/1001msec) 00:28:54.325 slat (nsec): min=8269, max=60104, avg=11774.82, stdev=4541.00 00:28:54.325 clat (usec): min=220, max=7493, avg=273.49, stdev=180.59 00:28:54.325 lat (usec): min=229, max=7512, avg=285.26, stdev=180.94 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 251], 00:28:54.325 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:28:54.325 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:28:54.325 | 99.00th=[ 330], 99.50th=[ 392], 99.90th=[ 3163], 99.95th=[ 7504], 00:28:54.325 | 99.99th=[ 7504] 00:28:54.325 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:28:54.325 slat (nsec): min=12249, max=64306, avg=17134.75, stdev=5273.72 00:28:54.325 clat (usec): min=81, max=2141, avg=203.84, stdev=51.59 00:28:54.325 lat (usec): min=122, max=2156, avg=220.98, stdev=52.01 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:28:54.325 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:28:54.325 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 233], 00:28:54.325 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 562], 99.95th=[ 1090], 00:28:54.325 | 99.99th=[ 2147] 00:28:54.325 bw ( KiB/s): min= 8296, max= 8296, per=19.22%, avg=8296.00, stdev= 0.00, samples=1 00:28:54.325 iops : min= 2074, max= 2074, avg=2074.00, stdev= 0.00, samples=1 00:28:54.325 lat (usec) : 100=0.03%, 250=60.45%, 500=39.37%, 750=0.03% 00:28:54.325 lat (msec) : 2=0.05%, 4=0.05%, 10=0.03% 00:28:54.325 cpu : usr=0.90%, sys=4.70%, ctx=3953, majf=0, minf=15 00:28:54.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 issued rwts: total=1904,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:54.325 job2: (groupid=0, jobs=1): err= 0: pid=70359: Wed 
Nov 20 05:40:13 2024 00:28:54.325 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:28:54.325 slat (nsec): min=9045, max=35709, avg=11320.73, stdev=2428.77 00:28:54.325 clat (usec): min=130, max=451, avg=156.98, stdev=12.38 00:28:54.325 lat (usec): min=143, max=461, avg=168.30, stdev=12.40 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:28:54.325 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:28:54.325 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 178], 00:28:54.325 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 204], 99.95th=[ 215], 00:28:54.325 | 99.99th=[ 453] 00:28:54.325 write: IOPS=3375, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec); 0 zone resets 00:28:54.325 slat (nsec): min=12381, max=53411, avg=16858.41, stdev=3974.02 00:28:54.325 clat (usec): min=97, max=2121, avg=123.97, stdev=36.67 00:28:54.325 lat (usec): min=115, max=2136, avg=140.83, stdev=36.77 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:28:54.325 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 125], 00:28:54.325 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:28:54.325 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 537], 00:28:54.325 | 99.99th=[ 2114] 00:28:54.325 bw ( KiB/s): min=13560, max=13560, per=31.42%, avg=13560.00, stdev= 0.00, samples=1 00:28:54.325 iops : min= 3390, max= 3390, avg=3390.00, stdev= 0.00, samples=1 00:28:54.325 lat (usec) : 100=0.06%, 250=99.89%, 500=0.02%, 750=0.02% 00:28:54.325 lat (msec) : 4=0.02% 00:28:54.325 cpu : usr=1.30%, sys=7.80%, ctx=6451, majf=0, minf=9 00:28:54.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 issued rwts: total=3072,3379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:54.325 job3: (groupid=0, jobs=1): err= 0: pid=70360: Wed Nov 20 05:40:13 2024 00:28:54.325 read: IOPS=1972, BW=7888KiB/s (8077kB/s)(7896KiB/1001msec) 00:28:54.325 slat (nsec): min=9356, max=42874, avg=14088.86, stdev=3186.21 00:28:54.325 clat (usec): min=143, max=672, avg=261.71, stdev=28.36 00:28:54.325 lat (usec): min=159, max=686, avg=275.80, stdev=28.60 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 161], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 245], 00:28:54.325 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:28:54.325 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:28:54.325 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 603], 99.95th=[ 676], 00:28:54.325 | 99.99th=[ 676] 00:28:54.325 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:28:54.325 slat (nsec): min=12444, max=99402, avg=19830.20, stdev=7622.98 00:28:54.325 clat (usec): min=111, max=1564, avg=199.89, stdev=38.20 00:28:54.325 lat (usec): min=127, max=1579, avg=219.72, stdev=37.39 00:28:54.325 clat percentiles (usec): 00:28:54.325 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:28:54.325 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:28:54.325 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 233], 00:28:54.325 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 379], 99.95th=[ 478], 00:28:54.325 | 99.99th=[ 1565] 
00:28:54.325 bw ( KiB/s): min= 8288, max= 8288, per=19.20%, avg=8288.00, stdev= 0.00, samples=1 00:28:54.325 iops : min= 2072, max= 2072, avg=2072.00, stdev= 0.00, samples=1 00:28:54.325 lat (usec) : 250=63.10%, 500=36.82%, 750=0.05% 00:28:54.325 lat (msec) : 2=0.02% 00:28:54.325 cpu : usr=1.20%, sys=5.60%, ctx=4022, majf=0, minf=15 00:28:54.325 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.325 issued rwts: total=1974,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.325 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:54.325 00:28:54.325 Run status group 0 (all jobs): 00:28:54.325 READ: bw=39.1MiB/s (41.0MB/s), 7608KiB/s-12.0MiB/s (7791kB/s-12.6MB/s), io=39.1MiB (41.1MB), run=1001-1001msec 00:28:54.325 WRITE: bw=42.1MiB/s (44.2MB/s), 8184KiB/s-13.2MiB/s (8380kB/s-13.8MB/s), io=42.2MiB (44.2MB), run=1001-1001msec 00:28:54.325 00:28:54.325 Disk stats (read/write): 00:28:54.325 nvme0n1: ios=2610/2882, merge=0/0, ticks=479/369, in_queue=848, util=90.97% 00:28:54.325 nvme0n2: ios=1580/1905, merge=0/0, ticks=484/394, in_queue=878, util=91.65% 00:28:54.325 nvme0n3: ios=2560/2950, merge=0/0, ticks=396/373, in_queue=769, util=88.91% 00:28:54.325 nvme0n4: ios=1536/1920, merge=0/0, ticks=397/401, in_queue=798, util=89.66% 00:28:54.325 05:40:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:28:54.325 [global] 00:28:54.325 thread=1 00:28:54.325 invalidate=1 00:28:54.325 rw=randwrite 00:28:54.325 time_based=1 00:28:54.325 runtime=1 00:28:54.325 ioengine=libaio 00:28:54.325 direct=1 00:28:54.325 bs=4096 00:28:54.325 iodepth=1 00:28:54.325 norandommap=0 00:28:54.325 numjobs=1 00:28:54.325 00:28:54.325 verify_dump=1 00:28:54.325 verify_backlog=512 00:28:54.325 verify_state_save=0 00:28:54.325 do_verify=1 00:28:54.325 verify=crc32c-intel 00:28:54.325 [job0] 00:28:54.325 filename=/dev/nvme0n1 00:28:54.325 [job1] 00:28:54.325 filename=/dev/nvme0n2 00:28:54.325 [job2] 00:28:54.325 filename=/dev/nvme0n3 00:28:54.325 [job3] 00:28:54.325 filename=/dev/nvme0n4 00:28:54.325 Could not set queue depth (nvme0n1) 00:28:54.326 Could not set queue depth (nvme0n2) 00:28:54.326 Could not set queue depth (nvme0n3) 00:28:54.326 Could not set queue depth (nvme0n4) 00:28:54.326 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:54.326 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:54.326 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:54.326 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:54.326 fio-3.35 00:28:54.326 Starting 4 threads 00:28:55.701 00:28:55.701 job0: (groupid=0, jobs=1): err= 0: pid=70418: Wed Nov 20 05:40:15 2024 00:28:55.701 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:28:55.701 slat (nsec): min=8462, max=54018, avg=10500.62, stdev=2437.26 00:28:55.701 clat (usec): min=123, max=5965, avg=261.04, stdev=143.77 00:28:55.701 lat (usec): min=132, max=5976, avg=271.54, stdev=144.09 00:28:55.701 clat percentiles (usec): 00:28:55.701 | 1.00th=[ 147], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:28:55.701 | 
30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:28:55.701 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:28:55.701 | 99.00th=[ 371], 99.50th=[ 603], 99.90th=[ 1500], 99.95th=[ 2343], 00:28:55.701 | 99.99th=[ 5997] 00:28:55.701 write: IOPS=2048, BW=8196KiB/s (8393kB/s)(8204KiB/1001msec); 0 zone resets 00:28:55.701 slat (nsec): min=12407, max=71532, avg=15578.46, stdev=5262.54 00:28:55.701 clat (usec): min=93, max=2345, avg=198.70, stdev=68.74 00:28:55.701 lat (usec): min=108, max=2371, avg=214.28, stdev=69.02 00:28:55.701 clat percentiles (usec): 00:28:55.701 | 1.00th=[ 104], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:28:55.701 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:28:55.701 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:28:55.701 | 99.00th=[ 247], 99.50th=[ 297], 99.90th=[ 898], 99.95th=[ 2057], 00:28:55.701 | 99.99th=[ 2343] 00:28:55.701 bw ( KiB/s): min= 8192, max= 8192, per=23.33%, avg=8192.00, stdev= 0.00, samples=1 00:28:55.701 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:28:55.701 lat (usec) : 100=0.17%, 250=70.19%, 500=29.23%, 750=0.22%, 1000=0.05% 00:28:55.701 lat (msec) : 2=0.05%, 4=0.07%, 10=0.02% 00:28:55.701 cpu : usr=1.10%, sys=4.10%, ctx=4100, majf=0, minf=9 00:28:55.701 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 issued rwts: total=2048,2051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:55.702 job1: (groupid=0, jobs=1): err= 0: pid=70419: Wed Nov 20 05:40:15 2024 00:28:55.702 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:28:55.702 slat (nsec): min=7116, max=37301, avg=9903.71, stdev=2217.47 00:28:55.702 clat (usec): min=138, max=491, avg=242.84, stdev=19.60 00:28:55.702 lat (usec): min=146, max=502, avg=252.74, stdev=19.52 00:28:55.702 clat percentiles (usec): 00:28:55.702 | 1.00th=[ 180], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 231], 00:28:55.702 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:28:55.702 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:28:55.702 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 367], 00:28:55.702 | 99.99th=[ 494] 00:28:55.702 write: IOPS=2328, BW=9315KiB/s (9538kB/s)(9324KiB/1001msec); 0 zone resets 00:28:55.702 slat (usec): min=8, max=140, avg=17.07, stdev= 7.20 00:28:55.702 clat (usec): min=88, max=365, avg=187.79, stdev=25.92 00:28:55.702 lat (usec): min=102, max=457, avg=204.85, stdev=26.42 00:28:55.702 clat percentiles (usec): 00:28:55.702 | 1.00th=[ 104], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:28:55.702 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:28:55.702 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 227], 00:28:55.702 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 343], 99.95th=[ 359], 00:28:55.702 | 99.99th=[ 367] 00:28:55.702 bw ( KiB/s): min= 9040, max= 9040, per=25.74%, avg=9040.00, stdev= 0.00, samples=1 00:28:55.702 iops : min= 2260, max= 2260, avg=2260.00, stdev= 0.00, samples=1 00:28:55.702 lat (usec) : 100=0.34%, 250=85.52%, 500=14.14% 00:28:55.702 cpu : usr=0.80%, sys=5.40%, ctx=4389, majf=0, minf=11 00:28:55.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.702 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 issued rwts: total=2048,2331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:55.702 job2: (groupid=0, jobs=1): err= 0: pid=70420: Wed Nov 20 05:40:15 2024 00:28:55.702 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:28:55.702 slat (nsec): min=7224, max=39704, avg=9986.58, stdev=2535.50 00:28:55.702 clat (usec): min=151, max=401, avg=242.81, stdev=18.37 00:28:55.702 lat (usec): min=159, max=421, avg=252.80, stdev=18.27 00:28:55.702 clat percentiles (usec): 00:28:55.702 | 1.00th=[ 202], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:28:55.702 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:28:55.702 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 269], 00:28:55.702 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 359], 99.95th=[ 367], 00:28:55.702 | 99.99th=[ 400] 00:28:55.702 write: IOPS=2333, BW=9332KiB/s (9556kB/s)(9332KiB/1000msec); 0 zone resets 00:28:55.702 slat (usec): min=7, max=114, avg=17.13, stdev= 6.74 00:28:55.702 clat (usec): min=83, max=400, avg=187.52, stdev=23.97 00:28:55.702 lat (usec): min=104, max=429, avg=204.65, stdev=24.08 00:28:55.702 clat percentiles (usec): 00:28:55.702 | 1.00th=[ 113], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:28:55.702 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:28:55.702 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 225], 00:28:55.702 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 322], 99.95th=[ 359], 00:28:55.702 | 99.99th=[ 400] 00:28:55.702 bw ( KiB/s): min= 9050, max= 9050, per=25.77%, avg=9050.00, stdev= 0.00, samples=1 00:28:55.702 iops : min= 2262, max= 2262, avg=2262.00, stdev= 0.00, samples=1 00:28:55.702 lat (usec) : 100=0.05%, 250=86.28%, 500=13.67% 00:28:55.702 cpu : usr=1.10%, sys=5.20%, ctx=4387, majf=0, minf=17 00:28:55.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 issued rwts: total=2048,2333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:55.702 job3: (groupid=0, jobs=1): err= 0: pid=70421: Wed Nov 20 05:40:15 2024 00:28:55.702 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:28:55.702 slat (nsec): min=8625, max=33417, avg=11349.15, stdev=2709.18 00:28:55.702 clat (usec): min=142, max=2421, avg=256.62, stdev=66.98 00:28:55.702 lat (usec): min=152, max=2442, avg=267.97, stdev=66.96 00:28:55.702 clat percentiles (usec): 00:28:55.702 | 1.00th=[ 153], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:28:55.702 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:28:55.702 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 293], 00:28:55.702 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 693], 99.95th=[ 1811], 00:28:55.702 | 99.99th=[ 2409] 00:28:55.702 write: IOPS=2070, BW=8284KiB/s (8483kB/s)(8292KiB/1001msec); 0 zone resets 00:28:55.702 slat (nsec): min=12580, max=77761, avg=17756.33, stdev=4938.97 00:28:55.702 clat (usec): min=112, max=629, avg=197.66, stdev=25.08 00:28:55.702 lat (usec): min=132, max=649, avg=215.41, stdev=25.28 00:28:55.702 clat percentiles (usec): 00:28:55.702 | 1.00th=[ 125], 5.00th=[ 176], 
10.00th=[ 182], 20.00th=[ 186], 00:28:55.702 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:28:55.702 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 231], 00:28:55.702 | 99.00th=[ 262], 99.50th=[ 371], 99.90th=[ 424], 99.95th=[ 429], 00:28:55.702 | 99.99th=[ 627] 00:28:55.702 bw ( KiB/s): min= 8376, max= 8376, per=23.85%, avg=8376.00, stdev= 0.00, samples=1 00:28:55.702 iops : min= 2094, max= 2094, avg=2094.00, stdev= 0.00, samples=1 00:28:55.702 lat (usec) : 250=70.03%, 500=29.82%, 750=0.10% 00:28:55.702 lat (msec) : 2=0.02%, 4=0.02% 00:28:55.702 cpu : usr=1.50%, sys=4.40%, ctx=4121, majf=0, minf=11 00:28:55.702 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:55.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.702 issued rwts: total=2048,2073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.702 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:55.702 00:28:55.702 Run status group 0 (all jobs): 00:28:55.702 READ: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8192KiB/s (8380kB/s-8389kB/s), io=32.0MiB (33.6MB), run=1000-1001msec 00:28:55.702 WRITE: bw=34.3MiB/s (36.0MB/s), 8196KiB/s-9332KiB/s (8393kB/s-9556kB/s), io=34.3MiB (36.0MB), run=1000-1001msec 00:28:55.702 00:28:55.702 Disk stats (read/write): 00:28:55.702 nvme0n1: ios=1598/2048, merge=0/0, ticks=469/411, in_queue=880, util=88.87% 00:28:55.702 nvme0n2: ios=1789/2048, merge=0/0, ticks=460/395, in_queue=855, util=89.79% 00:28:55.702 nvme0n3: ios=1739/2048, merge=0/0, ticks=424/399, in_queue=823, util=89.31% 00:28:55.702 nvme0n4: ios=1573/2048, merge=0/0, ticks=412/409, in_queue=821, util=89.77% 00:28:55.702 05:40:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:28:55.702 [global] 00:28:55.702 thread=1 00:28:55.702 invalidate=1 00:28:55.702 rw=write 00:28:55.702 time_based=1 00:28:55.702 runtime=1 00:28:55.702 ioengine=libaio 00:28:55.702 direct=1 00:28:55.702 bs=4096 00:28:55.702 iodepth=128 00:28:55.702 norandommap=0 00:28:55.702 numjobs=1 00:28:55.702 00:28:55.702 verify_dump=1 00:28:55.702 verify_backlog=512 00:28:55.702 verify_state_save=0 00:28:55.702 do_verify=1 00:28:55.702 verify=crc32c-intel 00:28:55.702 [job0] 00:28:55.702 filename=/dev/nvme0n1 00:28:55.702 [job1] 00:28:55.702 filename=/dev/nvme0n2 00:28:55.702 [job2] 00:28:55.702 filename=/dev/nvme0n3 00:28:55.702 [job3] 00:28:55.702 filename=/dev/nvme0n4 00:28:55.702 Could not set queue depth (nvme0n1) 00:28:55.702 Could not set queue depth (nvme0n2) 00:28:55.702 Could not set queue depth (nvme0n3) 00:28:55.702 Could not set queue depth (nvme0n4) 00:28:55.702 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:55.702 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:55.702 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:55.702 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:55.702 fio-3.35 00:28:55.702 Starting 4 threads 00:28:57.078 00:28:57.078 job0: (groupid=0, jobs=1): err= 0: pid=70476: Wed Nov 20 05:40:16 2024 00:28:57.078 read: IOPS=3441, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1002msec) 00:28:57.078 slat (usec): min=7, max=7228, 
avg=137.31, stdev=680.61 00:28:57.078 clat (usec): min=770, max=29937, avg=17209.54, stdev=3023.21 00:28:57.078 lat (usec): min=5078, max=31902, avg=17346.85, stdev=3083.75 00:28:57.078 clat percentiles (usec): 00:28:57.078 | 1.00th=[ 5932], 5.00th=[13435], 10.00th=[14353], 20.00th=[15270], 00:28:57.078 | 30.00th=[15795], 40.00th=[16581], 50.00th=[17171], 60.00th=[17695], 00:28:57.078 | 70.00th=[17957], 80.00th=[18744], 90.00th=[20579], 95.00th=[22414], 00:28:57.078 | 99.00th=[26346], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967], 00:28:57.078 | 99.99th=[30016] 00:28:57.078 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:28:57.078 slat (usec): min=13, max=4885, avg=137.81, stdev=516.29 00:28:57.078 clat (usec): min=10512, max=32388, avg=18703.37, stdev=5671.97 00:28:57.078 lat (usec): min=10537, max=32605, avg=18841.18, stdev=5709.79 00:28:57.078 clat percentiles (usec): 00:28:57.078 | 1.00th=[10683], 5.00th=[11207], 10.00th=[11338], 20.00th=[11994], 00:28:57.078 | 30.00th=[15664], 40.00th=[16909], 50.00th=[18220], 60.00th=[19792], 00:28:57.078 | 70.00th=[21627], 80.00th=[23987], 90.00th=[27919], 95.00th=[28181], 00:28:57.078 | 99.00th=[30016], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:28:57.078 | 99.99th=[32375] 00:28:57.078 bw ( KiB/s): min=12456, max=16240, per=25.10%, avg=14348.00, stdev=2675.69, samples=2 00:28:57.078 iops : min= 3114, max= 4060, avg=3587.00, stdev=668.92, samples=2 00:28:57.078 lat (usec) : 1000=0.01% 00:28:57.078 lat (msec) : 10=0.60%, 20=73.51%, 50=25.88% 00:28:57.078 cpu : usr=3.00%, sys=11.89%, ctx=447, majf=0, minf=16 00:28:57.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:57.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:57.078 issued rwts: total=3448,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:57.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:57.078 job1: (groupid=0, jobs=1): err= 0: pid=70477: Wed Nov 20 05:40:16 2024 00:28:57.078 read: IOPS=3261, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1003msec) 00:28:57.078 slat (usec): min=3, max=6628, avg=150.58, stdev=578.47 00:28:57.078 clat (usec): min=2313, max=26252, avg=18600.30, stdev=2726.64 00:28:57.078 lat (usec): min=4903, max=26271, avg=18750.88, stdev=2759.08 00:28:57.078 clat percentiles (usec): 00:28:57.078 | 1.00th=[ 7832], 5.00th=[14222], 10.00th=[15533], 20.00th=[16909], 00:28:57.078 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18744], 60.00th=[19268], 00:28:57.078 | 70.00th=[19530], 80.00th=[20317], 90.00th=[21627], 95.00th=[23200], 00:28:57.078 | 99.00th=[24773], 99.50th=[25035], 99.90th=[26084], 99.95th=[26084], 00:28:57.078 | 99.99th=[26346] 00:28:57.078 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:28:57.078 slat (usec): min=6, max=5814, avg=134.38, stdev=539.98 00:28:57.078 clat (usec): min=10520, max=26350, avg=18361.39, stdev=2245.30 00:28:57.078 lat (usec): min=10538, max=26428, avg=18495.77, stdev=2299.97 00:28:57.078 clat percentiles (usec): 00:28:57.078 | 1.00th=[12256], 5.00th=[14091], 10.00th=[15795], 20.00th=[16909], 00:28:57.078 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:28:57.078 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20841], 95.00th=[22152], 00:28:57.078 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:28:57.078 | 99.99th=[26346] 00:28:57.078 bw ( KiB/s): min=14296, max=14376, 
per=25.08%, avg=14336.00, stdev=56.57, samples=2 00:28:57.079 iops : min= 3574, max= 3594, avg=3584.00, stdev=14.14, samples=2 00:28:57.079 lat (msec) : 4=0.01%, 10=0.61%, 20=79.02%, 50=20.35% 00:28:57.079 cpu : usr=2.79%, sys=8.88%, ctx=1207, majf=0, minf=11 00:28:57.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:57.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:57.079 issued rwts: total=3271,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:57.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:57.079 job2: (groupid=0, jobs=1): err= 0: pid=70478: Wed Nov 20 05:40:16 2024 00:28:57.079 read: IOPS=3225, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1002msec) 00:28:57.079 slat (usec): min=3, max=5031, avg=145.63, stdev=545.12 00:28:57.079 clat (usec): min=1287, max=25079, avg=18515.36, stdev=2870.08 00:28:57.079 lat (usec): min=1656, max=26901, avg=18660.99, stdev=2899.48 00:28:57.079 clat percentiles (usec): 00:28:57.079 | 1.00th=[ 2212], 5.00th=[15533], 10.00th=[16450], 20.00th=[17433], 00:28:57.079 | 30.00th=[17957], 40.00th=[18482], 50.00th=[18744], 60.00th=[19268], 00:28:57.079 | 70.00th=[19530], 80.00th=[20317], 90.00th=[21103], 95.00th=[21627], 00:28:57.079 | 99.00th=[22676], 99.50th=[22938], 99.90th=[24249], 99.95th=[24249], 00:28:57.079 | 99.99th=[25035] 00:28:57.079 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:28:57.079 slat (usec): min=6, max=5827, avg=140.59, stdev=516.17 00:28:57.079 clat (usec): min=13422, max=24116, avg=18573.21, stdev=1420.53 00:28:57.079 lat (usec): min=13460, max=24133, avg=18713.80, stdev=1481.81 00:28:57.079 clat percentiles (usec): 00:28:57.079 | 1.00th=[15401], 5.00th=[16319], 10.00th=[16909], 20.00th=[17433], 00:28:57.079 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:28:57.079 | 70.00th=[19268], 80.00th=[19792], 90.00th=[20579], 95.00th=[21103], 00:28:57.079 | 99.00th=[22414], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:28:57.079 | 99.99th=[23987] 00:28:57.079 bw ( KiB/s): min=13744, max=14898, per=25.05%, avg=14321.00, stdev=816.00, samples=2 00:28:57.079 iops : min= 3436, max= 3724, avg=3580.00, stdev=203.65, samples=2 00:28:57.079 lat (msec) : 2=0.31%, 4=0.31%, 10=0.60%, 20=79.02%, 50=19.76% 00:28:57.079 cpu : usr=3.00%, sys=8.79%, ctx=1204, majf=0, minf=7 00:28:57.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:57.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:57.079 issued rwts: total=3232,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:57.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:57.079 job3: (groupid=0, jobs=1): err= 0: pid=70479: Wed Nov 20 05:40:16 2024 00:28:57.079 read: IOPS=3209, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1002msec) 00:28:57.079 slat (usec): min=8, max=5441, avg=155.10, stdev=680.21 00:28:57.079 clat (usec): min=1017, max=30514, avg=19862.54, stdev=3712.56 00:28:57.079 lat (usec): min=3703, max=30549, avg=20017.64, stdev=3687.60 00:28:57.079 clat percentiles (usec): 00:28:57.079 | 1.00th=[ 5997], 5.00th=[15139], 10.00th=[15926], 20.00th=[17171], 00:28:57.079 | 30.00th=[17695], 40.00th=[19006], 50.00th=[19530], 60.00th=[20579], 00:28:57.079 | 70.00th=[21890], 80.00th=[22938], 90.00th=[23725], 95.00th=[25297], 00:28:57.079 | 
99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30540], 00:28:57.079 | 99.99th=[30540] 00:28:57.079 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:28:57.079 slat (usec): min=11, max=4885, avg=130.28, stdev=544.20 00:28:57.079 clat (usec): min=9137, max=32749, avg=17366.14, stdev=4777.20 00:28:57.079 lat (usec): min=11080, max=32783, avg=17496.42, stdev=4783.13 00:28:57.079 clat percentiles (usec): 00:28:57.079 | 1.00th=[11469], 5.00th=[12911], 10.00th=[13304], 20.00th=[13698], 00:28:57.079 | 30.00th=[13960], 40.00th=[14353], 50.00th=[15795], 60.00th=[17171], 00:28:57.079 | 70.00th=[18482], 80.00th=[20841], 90.00th=[25035], 95.00th=[28705], 00:28:57.079 | 99.00th=[31327], 99.50th=[31589], 99.90th=[32637], 99.95th=[32637], 00:28:57.079 | 99.99th=[32637] 00:28:57.079 bw ( KiB/s): min=13712, max=14989, per=25.10%, avg=14350.50, stdev=902.98, samples=2 00:28:57.079 iops : min= 3428, max= 3747, avg=3587.50, stdev=225.57, samples=2 00:28:57.079 lat (msec) : 2=0.01%, 4=0.18%, 10=0.31%, 20=66.51%, 50=32.99% 00:28:57.079 cpu : usr=3.70%, sys=11.19%, ctx=333, majf=0, minf=15 00:28:57.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:57.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:57.079 issued rwts: total=3216,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:57.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:57.079 00:28:57.079 Run status group 0 (all jobs): 00:28:57.079 READ: bw=51.3MiB/s (53.8MB/s), 12.5MiB/s-13.4MiB/s (13.1MB/s-14.1MB/s), io=51.4MiB (53.9MB), run=1002-1003msec 00:28:57.079 WRITE: bw=55.8MiB/s (58.5MB/s), 14.0MiB/s-14.0MiB/s (14.6MB/s-14.7MB/s), io=56.0MiB (58.7MB), run=1002-1003msec 00:28:57.079 00:28:57.079 Disk stats (read/write): 00:28:57.079 nvme0n1: ios=3114/3079, merge=0/0, ticks=16543/17163, in_queue=33706, util=88.57% 00:28:57.079 nvme0n2: ios=2865/3072, merge=0/0, ticks=16820/16438, in_queue=33258, util=89.27% 00:28:57.079 nvme0n3: ios=2730/3072, merge=0/0, ticks=16124/17056, in_queue=33180, util=89.08% 00:28:57.079 nvme0n4: ios=2651/3072, merge=0/0, ticks=13155/11980, in_queue=25135, util=89.74% 00:28:57.079 05:40:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:28:57.079 [global] 00:28:57.079 thread=1 00:28:57.079 invalidate=1 00:28:57.079 rw=randwrite 00:28:57.079 time_based=1 00:28:57.079 runtime=1 00:28:57.079 ioengine=libaio 00:28:57.079 direct=1 00:28:57.079 bs=4096 00:28:57.079 iodepth=128 00:28:57.079 norandommap=0 00:28:57.079 numjobs=1 00:28:57.079 00:28:57.079 verify_dump=1 00:28:57.079 verify_backlog=512 00:28:57.079 verify_state_save=0 00:28:57.079 do_verify=1 00:28:57.079 verify=crc32c-intel 00:28:57.079 [job0] 00:28:57.079 filename=/dev/nvme0n1 00:28:57.079 [job1] 00:28:57.079 filename=/dev/nvme0n2 00:28:57.079 [job2] 00:28:57.079 filename=/dev/nvme0n3 00:28:57.079 [job3] 00:28:57.079 filename=/dev/nvme0n4 00:28:57.079 Could not set queue depth (nvme0n1) 00:28:57.079 Could not set queue depth (nvme0n2) 00:28:57.079 Could not set queue depth (nvme0n3) 00:28:57.079 Could not set queue depth (nvme0n4) 00:28:57.079 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:57.079 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:28:57.079 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:57.079 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:28:57.079 fio-3.35 00:28:57.079 Starting 4 threads 00:28:58.487 00:28:58.487 job0: (groupid=0, jobs=1): err= 0: pid=70538: Wed Nov 20 05:40:18 2024 00:28:58.487 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:28:58.487 slat (usec): min=5, max=21514, avg=153.00, stdev=1050.57 00:28:58.487 clat (usec): min=5576, max=45612, avg=18538.10, stdev=7875.19 00:28:58.487 lat (usec): min=5592, max=45635, avg=18691.10, stdev=7946.09 00:28:58.487 clat percentiles (usec): 00:28:58.487 | 1.00th=[ 6521], 5.00th=[10028], 10.00th=[10159], 20.00th=[11600], 00:28:58.487 | 30.00th=[11863], 40.00th=[13698], 50.00th=[17433], 60.00th=[20317], 00:28:58.487 | 70.00th=[23462], 80.00th=[24511], 90.00th=[30540], 95.00th=[33162], 00:28:58.487 | 99.00th=[40109], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:28:58.487 | 99.99th=[45351] 00:28:58.487 write: IOPS=2719, BW=10.6MiB/s (11.1MB/s)(10.8MiB/1014msec); 0 zone resets 00:28:58.487 slat (usec): min=6, max=17606, avg=211.22, stdev=1049.64 00:28:58.487 clat (msec): min=3, max=103, avg=29.36, stdev=17.48 00:28:58.487 lat (msec): min=3, max=103, avg=29.57, stdev=17.56 00:28:58.487 clat percentiles (msec): 00:28:58.487 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 18], 20.00th=[ 22], 00:28:58.487 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 24], 00:28:58.487 | 70.00th=[ 25], 80.00th=[ 36], 90.00th=[ 57], 95.00th=[ 65], 00:28:58.487 | 99.00th=[ 99], 99.50th=[ 103], 99.90th=[ 104], 99.95th=[ 104], 00:28:58.487 | 99.99th=[ 104] 00:28:58.487 bw ( KiB/s): min=10032, max=11008, per=15.27%, avg=10520.00, stdev=690.14, samples=2 00:28:58.487 iops : min= 2508, max= 2752, avg=2630.00, stdev=172.53, samples=2 00:28:58.487 lat (msec) : 4=0.08%, 10=5.13%, 20=29.94%, 50=58.33%, 100=6.11% 00:28:58.487 lat (msec) : 250=0.41% 00:28:58.487 cpu : usr=2.57%, sys=8.19%, ctx=349, majf=0, minf=11 00:28:58.487 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:58.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:58.487 issued rwts: total=2560,2758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.487 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:58.487 job1: (groupid=0, jobs=1): err= 0: pid=70539: Wed Nov 20 05:40:18 2024 00:28:58.487 read: IOPS=6096, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1005msec) 00:28:58.487 slat (usec): min=5, max=9910, avg=86.09, stdev=546.84 00:28:58.487 clat (usec): min=2498, max=20770, avg=11210.55, stdev=2645.86 00:28:58.487 lat (usec): min=4527, max=20786, avg=11296.64, stdev=2672.39 00:28:58.487 clat percentiles (usec): 00:28:58.487 | 1.00th=[ 5866], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9110], 00:28:58.487 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:28:58.487 | 70.00th=[11863], 80.00th=[13042], 90.00th=[15008], 95.00th=[16712], 00:28:58.487 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19792], 99.95th=[20055], 00:28:58.487 | 99.99th=[20841] 00:28:58.487 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:28:58.487 slat (usec): min=6, max=7884, avg=68.85, stdev=361.90 00:28:58.487 clat (usec): min=2946, max=19875, avg=9546.81, stdev=1925.53 00:28:58.487 lat (usec): min=2963, 
max=19888, avg=9615.66, stdev=1960.29 00:28:58.487 clat percentiles (usec): 00:28:58.488 | 1.00th=[ 4113], 5.00th=[ 5145], 10.00th=[ 6325], 20.00th=[ 8455], 00:28:58.488 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10421], 00:28:58.488 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:28:58.488 | 99.00th=[11863], 99.50th=[14091], 99.90th=[19268], 99.95th=[19792], 00:28:58.488 | 99.99th=[19792] 00:28:58.488 bw ( KiB/s): min=24576, max=24576, per=35.68%, avg=24576.00, stdev= 0.00, samples=2 00:28:58.488 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:28:58.488 lat (msec) : 4=0.40%, 10=36.92%, 20=62.66%, 50=0.02% 00:28:58.488 cpu : usr=5.18%, sys=15.14%, ctx=798, majf=0, minf=13 00:28:58.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:28:58.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:58.488 issued rwts: total=6127,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:58.488 job2: (groupid=0, jobs=1): err= 0: pid=70540: Wed Nov 20 05:40:18 2024 00:28:58.488 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:28:58.488 slat (usec): min=7, max=3879, avg=94.81, stdev=442.24 00:28:58.488 clat (usec): min=8674, max=16437, avg=12233.37, stdev=1247.71 00:28:58.488 lat (usec): min=8695, max=19376, avg=12328.18, stdev=1266.03 00:28:58.488 clat percentiles (usec): 00:28:58.488 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:28:58.488 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12518], 00:28:58.488 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14353], 00:28:58.488 | 99.00th=[15139], 99.50th=[15664], 99.90th=[15926], 99.95th=[16188], 00:28:58.488 | 99.99th=[16450] 00:28:58.488 write: IOPS=5480, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1001msec); 0 zone resets 00:28:58.488 slat (usec): min=11, max=3328, avg=85.29, stdev=358.90 00:28:58.488 clat (usec): min=939, max=16291, avg=11663.64, stdev=1494.17 00:28:58.488 lat (usec): min=958, max=16313, avg=11748.93, stdev=1478.08 00:28:58.488 clat percentiles (usec): 00:28:58.488 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[11076], 00:28:58.488 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:28:58.488 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13173], 95.00th=[13566], 00:28:58.488 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16188], 99.95th=[16319], 00:28:58.488 | 99.99th=[16319] 00:28:58.488 bw ( KiB/s): min=20480, max=20480, per=29.73%, avg=20480.00, stdev= 0.00, samples=1 00:28:58.488 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:28:58.488 lat (usec) : 1000=0.03% 00:28:58.488 lat (msec) : 2=0.08%, 4=0.04%, 10=8.12%, 20=91.74% 00:28:58.488 cpu : usr=5.00%, sys=15.00%, ctx=562, majf=0, minf=13 00:28:58.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:58.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:58.488 issued rwts: total=5120,5486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:58.488 job3: (groupid=0, jobs=1): err= 0: pid=70541: Wed Nov 20 05:40:18 2024 00:28:58.488 read: IOPS=2876, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1005msec) 00:28:58.488 slat 
(usec): min=4, max=20209, avg=190.00, stdev=1187.59 00:28:58.488 clat (usec): min=4373, max=72018, avg=20475.58, stdev=11815.55 00:28:58.488 lat (usec): min=4387, max=72041, avg=20665.59, stdev=11939.38 00:28:58.488 clat percentiles (usec): 00:28:58.488 | 1.00th=[ 6390], 5.00th=[10683], 10.00th=[11076], 20.00th=[12911], 00:28:58.488 | 30.00th=[13435], 40.00th=[14615], 50.00th=[16319], 60.00th=[21103], 00:28:58.488 | 70.00th=[22938], 80.00th=[24249], 90.00th=[31589], 95.00th=[44827], 00:28:58.488 | 99.00th=[68682], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:28:58.488 | 99.99th=[71828] 00:28:58.488 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:28:58.488 slat (usec): min=6, max=19738, avg=138.26, stdev=777.24 00:28:58.488 clat (usec): min=4668, max=71944, avg=22163.31, stdev=11498.48 00:28:58.488 lat (usec): min=4694, max=71958, avg=22301.57, stdev=11545.59 00:28:58.488 clat percentiles (usec): 00:28:58.488 | 1.00th=[ 6063], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11994], 00:28:58.488 | 30.00th=[18482], 40.00th=[20841], 50.00th=[22414], 60.00th=[22938], 00:28:58.488 | 70.00th=[23462], 80.00th=[24249], 90.00th=[30802], 95.00th=[54789], 00:28:58.488 | 99.00th=[61604], 99.50th=[61604], 99.90th=[69731], 99.95th=[71828], 00:28:58.488 | 99.99th=[71828] 00:28:58.488 bw ( KiB/s): min=11280, max=13322, per=17.86%, avg=12301.00, stdev=1443.91, samples=2 00:28:58.488 iops : min= 2820, max= 3330, avg=3075.00, stdev=360.62, samples=2 00:28:58.488 lat (msec) : 10=3.92%, 20=42.78%, 50=48.01%, 100=5.28% 00:28:58.488 cpu : usr=2.49%, sys=8.37%, ctx=420, majf=0, minf=15 00:28:58.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:28:58.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:58.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:58.488 issued rwts: total=2891,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:58.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:58.488 00:28:58.488 Run status group 0 (all jobs): 00:28:58.488 READ: bw=64.3MiB/s (67.5MB/s), 9.86MiB/s-23.8MiB/s (10.3MB/s-25.0MB/s), io=65.2MiB (68.4MB), run=1001-1014msec 00:28:58.488 WRITE: bw=67.3MiB/s (70.5MB/s), 10.6MiB/s-23.9MiB/s (11.1MB/s-25.0MB/s), io=68.2MiB (71.5MB), run=1001-1014msec 00:28:58.488 00:28:58.488 Disk stats (read/write): 00:28:58.488 nvme0n1: ios=2098/2183, merge=0/0, ticks=37801/61681, in_queue=99482, util=87.27% 00:28:58.488 nvme0n2: ios=5166/5215, merge=0/0, ticks=53018/46941, in_queue=99959, util=87.98% 00:28:58.488 nvme0n3: ios=4329/4608, merge=0/0, ticks=16385/15201, in_queue=31586, util=88.88% 00:28:58.488 nvme0n4: ios=2560/2591, merge=0/0, ticks=49187/53145, in_queue=102332, util=89.42% 00:28:58.488 05:40:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:28:58.488 05:40:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70554 00:28:58.488 05:40:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:28:58.488 05:40:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:28:58.488 [global] 00:28:58.488 thread=1 00:28:58.488 invalidate=1 00:28:58.488 rw=read 00:28:58.488 time_based=1 00:28:58.488 runtime=10 00:28:58.488 ioengine=libaio 00:28:58.488 direct=1 00:28:58.488 bs=4096 00:28:58.488 iodepth=1 00:28:58.488 norandommap=1 00:28:58.488 numjobs=1 00:28:58.488 00:28:58.488 [job0] 
00:28:58.488 filename=/dev/nvme0n1 00:28:58.488 [job1] 00:28:58.488 filename=/dev/nvme0n2 00:28:58.488 [job2] 00:28:58.488 filename=/dev/nvme0n3 00:28:58.488 [job3] 00:28:58.488 filename=/dev/nvme0n4 00:28:58.488 Could not set queue depth (nvme0n1) 00:28:58.488 Could not set queue depth (nvme0n2) 00:28:58.488 Could not set queue depth (nvme0n3) 00:28:58.488 Could not set queue depth (nvme0n4) 00:28:58.488 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:58.488 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:58.488 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:58.488 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:58.488 fio-3.35 00:28:58.488 Starting 4 threads 00:29:01.783 05:40:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:01.783 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41447424, buflen=4096 00:29:01.783 fio: pid=70602, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:01.783 05:40:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:02.042 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=54665216, buflen=4096 00:29:02.042 fio: pid=70601, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:02.042 05:40:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:02.042 05:40:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:02.301 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51818496, buflen=4096 00:29:02.301 fio: pid=70599, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:02.301 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:02.301 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:02.561 fio: pid=70600, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:02.561 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=28811264, buflen=4096 00:29:02.561 00:29:02.561 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70599: Wed Nov 20 05:40:22 2024 00:29:02.561 read: IOPS=3551, BW=13.9MiB/s (14.5MB/s)(49.4MiB/3562msec) 00:29:02.561 slat (usec): min=6, max=13892, avg=15.10, stdev=187.49 00:29:02.561 clat (usec): min=111, max=3142, avg=265.37, stdev=71.24 00:29:02.561 lat (usec): min=127, max=14129, avg=280.47, stdev=200.34 00:29:02.561 clat percentiles (usec): 00:29:02.561 | 1.00th=[ 133], 5.00th=[ 147], 10.00th=[ 159], 20.00th=[ 241], 00:29:02.561 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:29:02.561 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 343], 00:29:02.561 | 99.00th=[ 388], 99.50th=[ 429], 99.90th=[ 857], 99.95th=[ 1004], 00:29:02.561 | 99.99th=[ 2442] 
00:29:02.561 bw ( KiB/s): min=12640, max=14243, per=21.49%, avg=13488.50, stdev=632.93, samples=6 00:29:02.561 iops : min= 3160, max= 3560, avg=3372.00, stdev=158.05, samples=6 00:29:02.561 lat (usec) : 250=24.93%, 500=74.76%, 750=0.18%, 1000=0.06% 00:29:02.561 lat (msec) : 2=0.03%, 4=0.02% 00:29:02.561 cpu : usr=0.62%, sys=3.88%, ctx=12662, majf=0, minf=1 00:29:02.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 issued rwts: total=12652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:02.561 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70600: Wed Nov 20 05:40:22 2024 00:29:02.561 read: IOPS=6171, BW=24.1MiB/s (25.3MB/s)(91.5MiB/3795msec) 00:29:02.561 slat (usec): min=8, max=10837, avg=12.64, stdev=137.52 00:29:02.561 clat (usec): min=71, max=2758, avg=148.58, stdev=38.96 00:29:02.561 lat (usec): min=101, max=11160, avg=161.22, stdev=143.67 00:29:02.561 clat percentiles (usec): 00:29:02.561 | 1.00th=[ 104], 5.00th=[ 115], 10.00th=[ 127], 20.00th=[ 137], 00:29:02.561 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:29:02.561 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 182], 00:29:02.561 | 99.00th=[ 212], 99.50th=[ 229], 99.90th=[ 310], 99.95th=[ 519], 00:29:02.561 | 99.99th=[ 1991] 00:29:02.561 bw ( KiB/s): min=23152, max=24848, per=38.75%, avg=24313.43, stdev=623.73, samples=7 00:29:02.561 iops : min= 5788, max= 6212, avg=6078.29, stdev=155.90, samples=7 00:29:02.561 lat (usec) : 100=0.37%, 250=99.43%, 500=0.15%, 750=0.01%, 1000=0.01% 00:29:02.561 lat (msec) : 2=0.03%, 4=0.01% 00:29:02.561 cpu : usr=1.08%, sys=5.69%, ctx=23435, majf=0, minf=2 00:29:02.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 issued rwts: total=23419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:02.561 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70601: Wed Nov 20 05:40:22 2024 00:29:02.561 read: IOPS=3990, BW=15.6MiB/s (16.3MB/s)(52.1MiB/3345msec) 00:29:02.561 slat (usec): min=6, max=11763, avg=14.81, stdev=129.52 00:29:02.561 clat (usec): min=144, max=4137, avg=234.69, stdev=102.58 00:29:02.561 lat (usec): min=153, max=12017, avg=249.50, stdev=168.27 00:29:02.561 clat percentiles (usec): 00:29:02.561 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:29:02.561 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 198], 00:29:02.561 | 70.00th=[ 227], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 420], 00:29:02.561 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 717], 99.95th=[ 922], 00:29:02.561 | 99.99th=[ 3359] 00:29:02.561 bw ( KiB/s): min=10000, max=21088, per=25.55%, avg=16035.83, stdev=5284.62, samples=6 00:29:02.561 iops : min= 2500, max= 5272, avg=4008.83, stdev=1321.03, samples=6 00:29:02.561 lat (usec) : 250=74.20%, 500=25.22%, 750=0.49%, 1000=0.03% 00:29:02.561 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:29:02.561 cpu : usr=1.11%, sys=4.67%, ctx=13356, majf=0, minf=1 00:29:02.561 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 issued rwts: total=13347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:02.561 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70602: Wed Nov 20 05:40:22 2024 00:29:02.561 read: IOPS=3305, BW=12.9MiB/s (13.5MB/s)(39.5MiB/3062msec) 00:29:02.561 slat (nsec): min=9272, max=86512, avg=17610.38, stdev=5194.16 00:29:02.561 clat (usec): min=137, max=7668, avg=283.37, stdev=179.42 00:29:02.561 lat (usec): min=150, max=7688, avg=300.98, stdev=179.61 00:29:02.561 clat percentiles (usec): 00:29:02.561 | 1.00th=[ 159], 5.00th=[ 231], 10.00th=[ 243], 20.00th=[ 253], 00:29:02.561 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:29:02.561 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 338], 00:29:02.561 | 99.00th=[ 379], 99.50th=[ 469], 99.90th=[ 3523], 99.95th=[ 4752], 00:29:02.561 | 99.99th=[ 7373] 00:29:02.561 bw ( KiB/s): min=11872, max=13776, per=20.94%, avg=13140.83, stdev=703.02, samples=6 00:29:02.561 iops : min= 2968, max= 3444, avg=3285.17, stdev=175.74, samples=6 00:29:02.561 lat (usec) : 250=16.38%, 500=83.16%, 750=0.18%, 1000=0.09% 00:29:02.561 lat (msec) : 2=0.04%, 4=0.07%, 10=0.07% 00:29:02.561 cpu : usr=1.24%, sys=4.93%, ctx=10120, majf=0, minf=2 00:29:02.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:02.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.561 issued rwts: total=10120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:02.561 00:29:02.561 Run status group 0 (all jobs): 00:29:02.561 READ: bw=61.3MiB/s (64.3MB/s), 12.9MiB/s-24.1MiB/s (13.5MB/s-25.3MB/s), io=233MiB (244MB), run=3062-3795msec 00:29:02.561 00:29:02.561 Disk stats (read/write): 00:29:02.561 nvme0n1: ios=11460/0, merge=0/0, ticks=3206/0, in_queue=3206, util=95.25% 00:29:02.561 nvme0n2: ios=21909/0, merge=0/0, ticks=3345/0, in_queue=3345, util=95.31% 00:29:02.561 nvme0n3: ios=12452/0, merge=0/0, ticks=2959/0, in_queue=2959, util=96.67% 00:29:02.561 nvme0n4: ios=9383/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.41% 00:29:02.561 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:02.561 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:02.820 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:02.820 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:03.080 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:03.080 05:40:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:03.338 05:40:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:03.338 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:03.598 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:03.598 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70554 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:03.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:03.894 nvmf hotplug test: fio failed as expected 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:03.894 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.171 05:40:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.171 rmmod nvme_tcp 00:29:04.171 rmmod nvme_fabrics 00:29:04.171 rmmod nvme_keyring 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 70070 ']' 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 70070 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 70070 ']' 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 70070 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70070 00:29:04.171 killing process with pid 70070 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70070' 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 70070 00:29:04.171 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 70070 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
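For readers untangling the teardown trace above: the killprocess helper reduces to roughly the following (a minimal sketch reconstructed from this xtrace; the real helper in autotest_common.sh also special-cases processes launched via sudo):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1            # already gone? nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0, i.e. not a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap the app; tolerate non-child pids
}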
00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:04.431 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:29:04.690 ************************************ 00:29:04.690 END TEST nvmf_fio_target 00:29:04.690 ************************************ 00:29:04.690 00:29:04.690 real 0m19.816s 00:29:04.690 user 1m14.763s 00:29:04.690 sys 0m9.118s 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:29:04.690 ************************************ 00:29:04.690 START TEST nvmf_bdevio 00:29:04.690 ************************************ 00:29:04.690 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:29:04.950 * Looking for test storage... 
00:29:04.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.951 --rc genhtml_branch_coverage=1 00:29:04.951 --rc genhtml_function_coverage=1 00:29:04.951 --rc genhtml_legend=1 00:29:04.951 --rc geninfo_all_blocks=1 00:29:04.951 --rc geninfo_unexecuted_blocks=1 00:29:04.951 00:29:04.951 ' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.951 --rc genhtml_branch_coverage=1 00:29:04.951 --rc genhtml_function_coverage=1 00:29:04.951 --rc genhtml_legend=1 00:29:04.951 --rc geninfo_all_blocks=1 00:29:04.951 --rc geninfo_unexecuted_blocks=1 00:29:04.951 00:29:04.951 ' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.951 --rc genhtml_branch_coverage=1 00:29:04.951 --rc genhtml_function_coverage=1 00:29:04.951 --rc genhtml_legend=1 00:29:04.951 --rc geninfo_all_blocks=1 00:29:04.951 --rc geninfo_unexecuted_blocks=1 00:29:04.951 00:29:04.951 ' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:04.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.951 --rc genhtml_branch_coverage=1 00:29:04.951 --rc genhtml_function_coverage=1 00:29:04.951 --rc genhtml_legend=1 00:29:04.951 --rc geninfo_all_blocks=1 00:29:04.951 --rc geninfo_unexecuted_blocks=1 00:29:04.951 00:29:04.951 ' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.951 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:04.952 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
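An aside on the host identity set up in the common.sh sourcing above: each run generates a fresh host NQN/ID via nvme gen-hostnqn, and tests that use the kernel initiator splice them into nvme connect through the NVME_HOST array. A hedged sketch of that consumption (this bdevio test uses the in-process SPDK initiator instead, so no connect appears in the trace below):

nvme connect -t tcp -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"   # "${NVME_HOST[@]}" expands to these two flags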
00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:04.952 Cannot find device "nvmf_init_br" 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:04.952 Cannot find device "nvmf_init_br2" 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:04.952 Cannot find device "nvmf_tgt_br" 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:04.952 Cannot find device "nvmf_tgt_br2" 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:29:04.952 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:05.211 Cannot find device "nvmf_init_br" 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:05.212 Cannot find device "nvmf_init_br2" 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:05.212 Cannot find device "nvmf_tgt_br" 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:05.212 Cannot find device "nvmf_tgt_br2" 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:05.212 Cannot find device "nvmf_br" 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:05.212 Cannot find device "nvmf_init_if" 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:05.212 Cannot find device "nvmf_init_if2" 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:05.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:05.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:05.212 
05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:05.212 05:40:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:05.212 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:05.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:05.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:29:05.471 00:29:05.471 --- 10.0.0.3 ping statistics --- 00:29:05.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.471 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:05.471 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:05.471 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:29:05.471 00:29:05.471 --- 10.0.0.4 ping statistics --- 00:29:05.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.471 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:05.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:05.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:29:05.471 00:29:05.471 --- 10.0.0.1 ping statistics --- 00:29:05.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.471 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:05.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:05.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:29:05.471 00:29:05.471 --- 10.0.0.2 ping statistics --- 00:29:05.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.471 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:05.471 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70986 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70986 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 70986 ']' 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:05.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:05.472 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.472 [2024-11-20 05:40:25.321564] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
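The nvmfappstart call above boils down to launching the target inside the network namespace and then blocking in waitforlisten until the RPC socket answers. By hand that is approximately the following (a sketch; the polling loop is an assumption about waitforlisten's internals, which also cap retries per the max_retries=100 traced earlier):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
# rpc_get_methods is a cheap query; it fails until the app opens /var/tmp/spdk.sock
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done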
00:29:05.472 [2024-11-20 05:40:25.321711] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.731 [2024-11-20 05:40:25.480972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.731 [2024-11-20 05:40:25.530076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.732 [2024-11-20 05:40:25.530130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.732 [2024-11-20 05:40:25.530140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.732 [2024-11-20 05:40:25.530149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.732 [2024-11-20 05:40:25.530157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.732 [2024-11-20 05:40:25.531124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.732 [2024-11-20 05:40:25.531267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:05.732 [2024-11-20 05:40:25.531657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.732 [2024-11-20 05:40:25.531660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.991 [2024-11-20 05:40:25.700565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.991 Malloc0 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.991 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:05.992 [2024-11-20 05:40:25.766138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:05.992 { 00:29:05.992 "params": { 00:29:05.992 "name": "Nvme$subsystem", 00:29:05.992 "trtype": "$TEST_TRANSPORT", 00:29:05.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.992 "adrfam": "ipv4", 00:29:05.992 "trsvcid": "$NVMF_PORT", 00:29:05.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.992 "hdgst": ${hdgst:-false}, 00:29:05.992 "ddgst": ${ddgst:-false} 00:29:05.992 }, 00:29:05.992 "method": "bdev_nvme_attach_controller" 00:29:05.992 } 00:29:05.992 EOF 00:29:05.992 )") 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:05.992 05:40:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:05.992 "params": { 00:29:05.992 "name": "Nvme1", 00:29:05.992 "trtype": "tcp", 00:29:05.992 "traddr": "10.0.0.3", 00:29:05.992 "adrfam": "ipv4", 00:29:05.992 "trsvcid": "4420", 00:29:05.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.992 "hdgst": false, 00:29:05.992 "ddgst": false 00:29:05.992 }, 00:29:05.992 "method": "bdev_nvme_attach_controller" 00:29:05.992 }' 00:29:05.992 [2024-11-20 05:40:25.813395] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:29:05.992 [2024-11-20 05:40:25.813476] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71028 ] 00:29:06.251 [2024-11-20 05:40:25.966022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:06.251 [2024-11-20 05:40:26.032245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.251 [2024-11-20 05:40:26.032346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.251 [2024-11-20 05:40:26.032349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.509 I/O targets: 00:29:06.509 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:06.509 00:29:06.509 00:29:06.509 CUnit - A unit testing framework for C - Version 2.1-3 00:29:06.509 http://cunit.sourceforge.net/ 00:29:06.509 00:29:06.509 00:29:06.509 Suite: bdevio tests on: Nvme1n1 00:29:06.509 Test: blockdev write read block ...passed 00:29:06.509 Test: blockdev write zeroes read block ...passed 00:29:06.509 Test: blockdev write zeroes read no split ...passed 00:29:06.509 Test: blockdev write zeroes read split ...passed 00:29:06.509 Test: blockdev write zeroes read split partial ...passed 00:29:06.509 Test: blockdev reset ...[2024-11-20 05:40:26.324746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:06.509 [2024-11-20 05:40:26.324848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac5f50 (9): Bad file descriptor 00:29:06.509 [2024-11-20 05:40:26.336564] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:29:06.509 passed 00:29:06.509 Test: blockdev write read 8 blocks ...passed 00:29:06.509 Test: blockdev write read size > 128k ...passed 00:29:06.509 Test: blockdev write read invalid size ...passed 00:29:06.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:06.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:06.509 Test: blockdev write read max offset ...passed 00:29:06.768 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:06.768 Test: blockdev writev readv 8 blocks ...passed 00:29:06.768 Test: blockdev writev readv 30 x 1block ...passed 00:29:06.768 Test: blockdev writev readv block ...passed 00:29:06.768 Test: blockdev writev readv size > 128k ...passed 00:29:06.768 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:06.768 Test: blockdev comparev and writev ...[2024-11-20 05:40:26.510506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.510563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.510580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.510607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.510872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.510969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.510987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.510998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.511367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.511391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.511407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.511418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.511837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.511873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.511890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:06.768 [2024-11-20 05:40:26.511901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.768 passed 00:29:06.768 Test: blockdev nvme passthru rw ...passed 00:29:06.768 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:40:26.593764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:06.768 [2024-11-20 05:40:26.593800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.768 [2024-11-20 05:40:26.593911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:06.769 [2024-11-20 05:40:26.593923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.769 [2024-11-20 05:40:26.594481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:06.769 [2024-11-20 05:40:26.594503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.769 [2024-11-20 05:40:26.594601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:06.769 [2024-11-20 05:40:26.594614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.769 passed 00:29:06.769 Test: blockdev nvme admin passthru ...passed 00:29:06.769 Test: blockdev copy ...passed 00:29:06.769 00:29:06.769 Run Summary: Type Total Ran Passed Failed Inactive 00:29:06.769 suites 1 1 n/a 0 0 00:29:06.769 tests 23 23 23 0 0 00:29:06.769 asserts 152 152 152 0 n/a 00:29:06.769 00:29:06.769 Elapsed time = 0.872 seconds 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.028 rmmod nvme_tcp 00:29:07.028 rmmod nvme_fabrics 00:29:07.028 rmmod nvme_keyring 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
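Stripped of the xtrace noise, the target-side setup this bdevio suite just exercised was five RPCs (arguments exactly as traced above; shown here via rpc.py, whereas the test issues them through its rpc_cmd wrapper over the same socket):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # transport options exactly as traced
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420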
00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70986 ']' 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70986 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 70986 ']' 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 70986 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:07.028 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70986 00:29:07.287 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:29:07.287 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:29:07.287 killing process with pid 70986 00:29:07.287 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70986' 00:29:07.287 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 70986 00:29:07.287 05:40:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 70986 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:07.287 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:07.288 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:29:07.547 00:29:07.547 real 0m2.830s 00:29:07.547 user 0m8.445s 00:29:07.547 sys 0m0.945s 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 ************************************ 00:29:07.547 END TEST nvmf_bdevio 00:29:07.547 ************************************ 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:07.547 00:29:07.547 real 3m32.635s 00:29:07.547 user 10m51.633s 00:29:07.547 sys 1m12.112s 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:07.547 05:40:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:29:07.547 ************************************ 00:29:07.547 END TEST nvmf_target_core 00:29:07.547 ************************************ 00:29:07.807 05:40:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:29:07.807 05:40:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:07.807 05:40:27 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:07.807 05:40:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.807 ************************************ 00:29:07.807 START TEST nvmf_target_extra 00:29:07.807 ************************************ 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:29:07.807 * Looking for test storage... 
00:29:07.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:07.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.807 --rc genhtml_branch_coverage=1 00:29:07.807 --rc genhtml_function_coverage=1 00:29:07.807 --rc genhtml_legend=1 00:29:07.807 --rc geninfo_all_blocks=1 00:29:07.807 --rc geninfo_unexecuted_blocks=1 00:29:07.807 00:29:07.807 ' 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:07.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.807 --rc genhtml_branch_coverage=1 00:29:07.807 --rc genhtml_function_coverage=1 00:29:07.807 --rc genhtml_legend=1 00:29:07.807 --rc geninfo_all_blocks=1 00:29:07.807 --rc geninfo_unexecuted_blocks=1 00:29:07.807 00:29:07.807 ' 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:07.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.807 --rc genhtml_branch_coverage=1 00:29:07.807 --rc genhtml_function_coverage=1 00:29:07.807 --rc genhtml_legend=1 00:29:07.807 --rc geninfo_all_blocks=1 00:29:07.807 --rc geninfo_unexecuted_blocks=1 00:29:07.807 00:29:07.807 ' 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:07.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.807 --rc genhtml_branch_coverage=1 00:29:07.807 --rc genhtml_function_coverage=1 00:29:07.807 --rc genhtml_legend=1 00:29:07.807 --rc geninfo_all_blocks=1 00:29:07.807 --rc geninfo_unexecuted_blocks=1 00:29:07.807 00:29:07.807 ' 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.807 05:40:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.807 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:08.068 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:08.068 ************************************ 00:29:08.068 START TEST nvmf_example 00:29:08.068 ************************************ 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:29:08.068 * Looking for test storage... 
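[Editor's note] The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above is bash objecting to a numeric test on an empty value: the trace shows '[' '' -eq 1 ']' at nvmf/common.sh@33. The harness tolerates it because the test merely evaluates false (exit status 2) and execution falls through to the next branch. The variable under test is not named in the trace, so the reproduction below uses a placeholder:

    # Minimal reproduction of the benign line-33 complaint. SOME_FLAG is a
    # placeholder; the real variable name is not visible in this log.
    unset SOME_FLAG
    [ "$SOME_FLAG" -eq 1 ] && echo enabled   # prints "[: : integer expression expected", status 2
    # Conventional guard: give the numeric test a default so the comparison is well-formed.
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled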
00:29:08.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:29:08.068 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:08.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.069 --rc genhtml_branch_coverage=1 00:29:08.069 --rc genhtml_function_coverage=1 00:29:08.069 --rc genhtml_legend=1 00:29:08.069 --rc geninfo_all_blocks=1 00:29:08.069 --rc geninfo_unexecuted_blocks=1 00:29:08.069 00:29:08.069 ' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:08.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.069 --rc genhtml_branch_coverage=1 00:29:08.069 --rc genhtml_function_coverage=1 00:29:08.069 --rc genhtml_legend=1 00:29:08.069 --rc geninfo_all_blocks=1 00:29:08.069 --rc geninfo_unexecuted_blocks=1 00:29:08.069 00:29:08.069 ' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:08.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.069 --rc genhtml_branch_coverage=1 00:29:08.069 --rc genhtml_function_coverage=1 00:29:08.069 --rc genhtml_legend=1 00:29:08.069 --rc geninfo_all_blocks=1 00:29:08.069 --rc geninfo_unexecuted_blocks=1 00:29:08.069 00:29:08.069 ' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:08.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.069 --rc genhtml_branch_coverage=1 00:29:08.069 --rc genhtml_function_coverage=1 00:29:08.069 --rc genhtml_legend=1 00:29:08.069 --rc geninfo_all_blocks=1 00:29:08.069 --rc geninfo_unexecuted_blocks=1 00:29:08.069 00:29:08.069 ' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:29:08.069 05:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:08.069 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:08.069 05:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.069 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:08.070 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:08.329 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:08.329 Cannot find device "nvmf_init_br" 00:29:08.329 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:29:08.329 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:08.329 Cannot find device "nvmf_init_br2" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:08.329 Cannot find device "nvmf_tgt_br" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:08.329 Cannot find device "nvmf_tgt_br2" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:08.329 Cannot find device "nvmf_init_br" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:08.329 Cannot find device "nvmf_init_br2" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:08.329 Cannot find device "nvmf_tgt_br" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:08.329 Cannot find device "nvmf_tgt_br2" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:08.329 Cannot find device "nvmf_br" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:08.329 Cannot find 
device "nvmf_init_if" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:08.329 Cannot find device "nvmf_init_if2" 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:29:08.329 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:08.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:08.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:08.330 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:08.589 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:08.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:08.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:29:08.590 00:29:08.590 --- 10.0.0.3 ping statistics --- 00:29:08.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.590 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:08.590 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:08.590 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:29:08.590 00:29:08.590 --- 10.0.0.4 ping statistics --- 00:29:08.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.590 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:08.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:29:08.590 00:29:08.590 --- 10.0.0.1 ping statistics --- 00:29:08.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.590 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:08.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:29:08.590 00:29:08.590 --- 10.0.0.2 ping statistics --- 00:29:08.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.590 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71317 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71317 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 71317 ']' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:08.590 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:08.590 05:40:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.968 05:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:29:09.968 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:19.982 Initializing NVMe Controllers 00:29:19.982 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.982 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.982 Initialization complete. Launching workers. 00:29:19.982 ======================================================== 00:29:19.982 Latency(us) 00:29:19.982 Device Information : IOPS MiB/s Average min max 00:29:19.982 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16669.04 65.11 3839.27 606.21 21620.56 00:29:19.982 ======================================================== 00:29:19.982 Total : 16669.04 65.11 3839.27 606.21 21620.56 00:29:19.982 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.241 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.241 rmmod nvme_tcp 00:29:20.241 rmmod nvme_fabrics 00:29:20.241 rmmod nvme_keyring 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 71317 ']' 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 71317 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 71317 ']' 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 71317 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71317 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:29:20.241 05:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:29:20.241 killing process with pid 71317 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71317' 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 71317 00:29:20.241 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 71317 00:29:20.500 nvmf threads initialize successfully 00:29:20.500 bdev subsystem init successfully 00:29:20.500 created a nvmf target service 00:29:20.500 create targets's poll groups done 00:29:20.500 all subsystems of target started 00:29:20.500 nvmf target is running 00:29:20.500 all subsystems of target stopped 00:29:20.500 destroy targets's poll groups done 00:29:20.500 destroyed the nvmf target service 00:29:20.500 bdev subsystem finish successfully 00:29:20.500 nvmf threads destroy successfully 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:20.500 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:20.501 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:20.501 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:20.501 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:20.501 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:20.501 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:20.501 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:20.501 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:20.760 00:29:20.760 real 0m12.768s 00:29:20.760 user 0m44.586s 00:29:20.760 sys 0m2.550s 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:20.760 ************************************ 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:29:20.760 END TEST nvmf_example 00:29:20.760 ************************************ 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:20.760 ************************************ 00:29:20.760 START TEST nvmf_filesystem 00:29:20.760 ************************************ 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:29:20.760 * Looking for test storage... 
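[Editor's note] The nvmf_example test that ends above provisions its target entirely through rpc_cmd calls, each visible in the trace. For reference, a sketch of the equivalent direct invocation; the rpc.py path and its -s socket argument are assumptions, since the trace shows only the rpc_cmd wrapper plus the /var/tmp/spdk.sock listen message:

    # Target app, launched inside the test's network namespace (pid 71317 above).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF &
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed expansion of rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                        # 64 MB RAM disk, 512 B blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Load generator, exactly as the harness ran it (~16.7K IOPS in the table above):
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'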
00:29:20.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:20.760 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.021 --rc genhtml_branch_coverage=1 00:29:21.021 --rc genhtml_function_coverage=1 00:29:21.021 --rc genhtml_legend=1 00:29:21.021 --rc geninfo_all_blocks=1 00:29:21.021 --rc geninfo_unexecuted_blocks=1 00:29:21.021 00:29:21.021 ' 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.021 --rc genhtml_branch_coverage=1 00:29:21.021 --rc genhtml_function_coverage=1 00:29:21.021 --rc genhtml_legend=1 00:29:21.021 --rc geninfo_all_blocks=1 00:29:21.021 --rc geninfo_unexecuted_blocks=1 00:29:21.021 00:29:21.021 ' 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.021 --rc genhtml_branch_coverage=1 00:29:21.021 --rc genhtml_function_coverage=1 00:29:21.021 --rc genhtml_legend=1 00:29:21.021 --rc geninfo_all_blocks=1 00:29:21.021 --rc geninfo_unexecuted_blocks=1 00:29:21.021 00:29:21.021 ' 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:21.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.021 --rc genhtml_branch_coverage=1 00:29:21.021 --rc genhtml_function_coverage=1 00:29:21.021 --rc genhtml_legend=1 00:29:21.021 --rc geninfo_all_blocks=1 00:29:21.021 --rc geninfo_unexecuted_blocks=1 00:29:21.021 00:29:21.021 ' 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:29:21.021 05:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:29:21.021 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
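[editor's note] The lcov check traced at the top of this block enters lt/cmp_versions from scripts/common.sh, a component-wise version comparison: split each version on ".", "-" or ":", then compare field by field. A minimal reconstruction of that traced logic follows; the real script validates each field with its decimal() regex helper, while this sketch assumes purely numeric fields and treats missing components as 0 (so 1.15 vs 2 compares as 1.15 vs 2.0):

    # Sketch of the comparator traced from scripts/common.sh@333-368.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}     # missing fields assumed 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        return 1   # equal: neither strictly '<' nor '>'
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # the wrapper the trace enters as 'lt 1.15 2'
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"

Since lt 1.15 2 succeeds here, the traced run takes the pre-2.x branch and sets the lcov_branch_coverage/lcov_function_coverage LCOV_OPTS seen above.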
00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:29:21.022 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:29:21.022 #define SPDK_CONFIG_H 00:29:21.022 #define SPDK_CONFIG_AIO_FSDEV 1 00:29:21.022 #define SPDK_CONFIG_APPS 1 00:29:21.022 #define SPDK_CONFIG_ARCH 
native 00:29:21.022 #undef SPDK_CONFIG_ASAN 00:29:21.022 #define SPDK_CONFIG_AVAHI 1 00:29:21.022 #undef SPDK_CONFIG_CET 00:29:21.022 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:29:21.022 #define SPDK_CONFIG_COVERAGE 1 00:29:21.022 #define SPDK_CONFIG_CROSS_PREFIX 00:29:21.022 #undef SPDK_CONFIG_CRYPTO 00:29:21.022 #undef SPDK_CONFIG_CRYPTO_MLX5 00:29:21.022 #undef SPDK_CONFIG_CUSTOMOCF 00:29:21.022 #undef SPDK_CONFIG_DAOS 00:29:21.022 #define SPDK_CONFIG_DAOS_DIR 00:29:21.022 #define SPDK_CONFIG_DEBUG 1 00:29:21.022 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:29:21.022 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:29:21.022 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:21.022 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:21.022 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:21.022 #undef SPDK_CONFIG_DPDK_UADK 00:29:21.022 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:29:21.022 #define SPDK_CONFIG_EXAMPLES 1 00:29:21.022 #undef SPDK_CONFIG_FC 00:29:21.022 #define SPDK_CONFIG_FC_PATH 00:29:21.022 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:21.022 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:21.022 #define SPDK_CONFIG_FSDEV 1 00:29:21.022 #undef SPDK_CONFIG_FUSE 00:29:21.022 #undef SPDK_CONFIG_FUZZER 00:29:21.022 #define SPDK_CONFIG_FUZZER_LIB 00:29:21.022 #define SPDK_CONFIG_GOLANG 1 00:29:21.022 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:29:21.022 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:29:21.022 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:21.022 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:29:21.022 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:21.022 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:21.022 #undef SPDK_CONFIG_HAVE_LZ4 00:29:21.022 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:29:21.022 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:29:21.022 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:21.022 #define SPDK_CONFIG_IDXD 1 00:29:21.022 #define SPDK_CONFIG_IDXD_KERNEL 1 00:29:21.022 #undef SPDK_CONFIG_IPSEC_MB 00:29:21.022 #define SPDK_CONFIG_IPSEC_MB_DIR 00:29:21.022 #define SPDK_CONFIG_ISAL 1 00:29:21.022 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:21.022 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:21.023 #define SPDK_CONFIG_LIBDIR 00:29:21.023 #undef SPDK_CONFIG_LTO 00:29:21.023 #define SPDK_CONFIG_MAX_LCORES 128 00:29:21.023 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:29:21.023 #define SPDK_CONFIG_NVME_CUSE 1 00:29:21.023 #undef SPDK_CONFIG_OCF 00:29:21.023 #define SPDK_CONFIG_OCF_PATH 00:29:21.023 #define SPDK_CONFIG_OPENSSL_PATH 00:29:21.023 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:21.023 #define SPDK_CONFIG_PGO_DIR 00:29:21.023 #undef SPDK_CONFIG_PGO_USE 00:29:21.023 #define SPDK_CONFIG_PREFIX /usr/local 00:29:21.023 #undef SPDK_CONFIG_RAID5F 00:29:21.023 #undef SPDK_CONFIG_RBD 00:29:21.023 #define SPDK_CONFIG_RDMA 1 00:29:21.023 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:21.023 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:21.023 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:21.023 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:21.023 #define SPDK_CONFIG_SHARED 1 00:29:21.023 #undef SPDK_CONFIG_SMA 00:29:21.023 #define SPDK_CONFIG_TESTS 1 00:29:21.023 #undef SPDK_CONFIG_TSAN 00:29:21.023 #define SPDK_CONFIG_UBLK 1 00:29:21.023 #define SPDK_CONFIG_UBSAN 1 00:29:21.023 #undef SPDK_CONFIG_UNIT_TESTS 00:29:21.023 #undef SPDK_CONFIG_URING 00:29:21.023 #define SPDK_CONFIG_URING_PATH 00:29:21.023 #undef SPDK_CONFIG_URING_ZNS 00:29:21.023 #define SPDK_CONFIG_USDT 1 00:29:21.023 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:29:21.023 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:29:21.023 
#undef SPDK_CONFIG_VFIO_USER 00:29:21.023 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:21.023 #define SPDK_CONFIG_VHOST 1 00:29:21.023 #define SPDK_CONFIG_VIRTIO 1 00:29:21.023 #undef SPDK_CONFIG_VTUNE 00:29:21.023 #define SPDK_CONFIG_VTUNE_DIR 00:29:21.023 #define SPDK_CONFIG_WERROR 1 00:29:21.023 #define SPDK_CONFIG_WPDK_DIR 00:29:21.023 #undef SPDK_CONFIG_XNVME 00:29:21.023 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:29:21.023 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:29:21.024 
05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:29:21.024 05:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:21.024 05:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:21.024 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:21.025 
05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j10 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 
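[editor's note] Just above, autotest_common.sh finishes its hugepage and make defaults (HUGEMEM=4096, MAKEFLAGS=-j10) and enters its argument loop (@308-314, continuing below), which folds command-line options into TEST_* variables; this run ends up with TEST_TRANSPORT=tcp. A minimal sketch of that defaults-then-override pattern; the flag spellings (--transport=, --iso) are assumptions for illustration, as only the resulting variables appear in the trace:

    # Environment-overridable defaults first, then CLI flags folded on top.
    : "${HUGEMEM:=4096}"        # traced default
    : "${MAKEFLAGS:=-j10}"      # traced value
    TEST_MODE= TEST_TRANSPORT=

    for i in "$@"; do
        case "$i" in
            --iso) TEST_MODE=iso ;;                        # assumed spelling
            --transport=*) TEST_TRANSPORT="${i#*=}" ;;     # assumed spelling
        esac
    done
    export HUGEMEM MAKEFLAGS
    echo "transport: ${TEST_TRANSPORT:-none}"   # prints 'tcp' for --transport=tcp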
00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 71590 ]] 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 71590 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.6hlCpx 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.6hlCpx/tests/target /tmp/spdk.6hlCpx 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13979500544 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5588910080 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=devtmpfs 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4194304 00:29:21.025 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=4194304 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6256394240 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266425344 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=2486431744 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=2506571776 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=20140032 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda5 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=btrfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=13979500544 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=20314062848 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5588910080 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6266286080 
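[editor's note] Around this point set_test_storage is snapshotting df -T into per-mount associative arrays (mounts, fss, sizes, avails, uses) and will next compare each storage candidate's free space against requested_size=2214592512 (2 GiB plus 64 MiB of slack). A condensed sketch of that bookkeeping; the column order, array names, and the awk mount-point lookup follow the trace, while the --block-size=1 invocation and the candidate paths are assumptions:

    # One associative array per df column, keyed by mount point, as traced.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        uses["$mount"]=$use
        avails["$mount"]=$avail
    done < <(df -T --block-size=1 | grep -v Filesystem)   # byte units assumed

    requested_size=$((2147483648 + 64 * 1024 * 1024))     # 2214592512, as traced
    for target_dir in "$PWD" "${TMPDIR:-/tmp}"; do        # placeholder candidates
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails[$mount]:-0} >= requested_size )); then
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done

In the traced run the first candidate, the btrfs /home mount with 13979500544 bytes available, already satisfies the 2214592512-byte request, so the loop stops there and the test storage lands under test/nvmf/target.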
00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6266425344 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=139264 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda2 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext4 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=840085504 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1012768768 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=103477248 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/vda3 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=vfat 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=91617280 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=104607744 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12990464 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=1253269504 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=1253281792 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=fuse.sshfs 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=95580770304 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=105088212992 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4122009600 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:29:21.026 * Looking for test storage... 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/home 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=13979500544 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == tmpfs ]] 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ btrfs == ramfs ]] 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ /home == / ]] 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:21.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:29:21.026 05:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.026 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:21.286 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:29:21.286 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:29:21.286 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.286 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:29:21.286 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.286 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:29:21.286 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.287 --rc genhtml_branch_coverage=1 00:29:21.287 --rc genhtml_function_coverage=1 00:29:21.287 --rc genhtml_legend=1 00:29:21.287 --rc geninfo_all_blocks=1 00:29:21.287 --rc geninfo_unexecuted_blocks=1 00:29:21.287 00:29:21.287 ' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.287 --rc genhtml_branch_coverage=1 00:29:21.287 --rc genhtml_function_coverage=1 00:29:21.287 --rc genhtml_legend=1 00:29:21.287 --rc geninfo_all_blocks=1 00:29:21.287 --rc geninfo_unexecuted_blocks=1 00:29:21.287 00:29:21.287 ' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.287 --rc genhtml_branch_coverage=1 00:29:21.287 --rc genhtml_function_coverage=1 00:29:21.287 --rc genhtml_legend=1 00:29:21.287 --rc geninfo_all_blocks=1 00:29:21.287 --rc geninfo_unexecuted_blocks=1 00:29:21.287 00:29:21.287 ' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:21.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.287 --rc genhtml_branch_coverage=1 00:29:21.287 --rc genhtml_function_coverage=1 00:29:21.287 --rc genhtml_legend=1 00:29:21.287 --rc geninfo_all_blocks=1 00:29:21.287 --rc geninfo_unexecuted_blocks=1 00:29:21.287 00:29:21.287 ' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:21.287 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.287 05:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:21.287 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:21.288 Cannot find device "nvmf_init_br" 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:29:21.288 05:40:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:21.288 Cannot find device "nvmf_init_br2" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:21.288 Cannot find device "nvmf_tgt_br" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:21.288 Cannot find device "nvmf_tgt_br2" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:21.288 Cannot find device "nvmf_init_br" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:21.288 Cannot find device "nvmf_init_br2" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:21.288 Cannot find device "nvmf_tgt_br" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:21.288 Cannot find device "nvmf_tgt_br2" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:21.288 Cannot find device "nvmf_br" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:21.288 Cannot find device "nvmf_init_if" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:21.288 Cannot find device "nvmf_init_if2" 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:21.288 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:21.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:21.288 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:21.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:21.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:29:21.548 00:29:21.548 --- 10.0.0.3 ping statistics --- 00:29:21.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.548 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:21.548 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:21.548 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:29:21.548 00:29:21.548 --- 10.0.0.4 ping statistics --- 00:29:21.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.548 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:21.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:29:21.548 00:29:21.548 --- 10.0.0.1 ping statistics --- 00:29:21.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.548 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:29:21.548 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:21.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:21.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:29:21.548 00:29:21.548 --- 10.0.0.2 ping statistics --- 00:29:21.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.549 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:29:21.549 ************************************ 00:29:21.549 START TEST nvmf_filesystem_no_in_capsule 00:29:21.549 ************************************ 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71784 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71784 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 71784 ']' 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:21.549 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:21.549 [2024-11-20 05:40:41.461284] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:21.549 [2024-11-20 05:40:41.461380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.808 [2024-11-20 05:40:41.609842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:21.808 [2024-11-20 05:40:41.656455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.808 [2024-11-20 05:40:41.656513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.808 [2024-11-20 05:40:41.656522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.808 [2024-11-20 05:40:41.656530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.808 [2024-11-20 05:40:41.656537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
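By this point the harness has built a private network namespace for the target: `ip netns add nvmf_tgt_ns_spdk`, veth pairs whose host-side peers hang off the `nvmf_br` bridge, 10.0.0.1/10.0.0.2 on the initiator side, 10.0.0.3/10.0.0.4 inside the namespace, and iptables ACCEPT rules for port 4420. The condensed sketch below shows the core of that topology and how `nvmf_tgt` ends up running inside the namespace; the second initiator/target pair and the iptables rules are omitted for brevity:

```bash
#!/usr/bin/env bash
# Condensed reconstruction of the veth/netns topology from the trace;
# interface and namespace names are taken from the log output.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair per side; the *_br ends stay on the host for bridging.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"        # target end lives in the netns

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peer ends so initiator and target can talk.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br up

ping -c 1 10.0.0.3                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

# The target itself then runs inside the namespace, as in the trace:
ip netns exec "$NS" \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
```

With the target confined to the namespace, the host acts as the NVMe/TCP initiator and reaches it only across the bridged veth path, which is what the four ping checks in the trace verify.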
00:29:21.808 [2024-11-20 05:40:41.657511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.808 [2024-11-20 05:40:41.657778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.808 [2024-11-20 05:40:41.657922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.808 [2024-11-20 05:40:41.657927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:22.067 [2024-11-20 05:40:41.817996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:22.067 Malloc1 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.067 05:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:22.067 [2024-11-20 05:40:41.962053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.067 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:22.326 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.326 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:29:22.326 { 00:29:22.326 "aliases": [ 00:29:22.326 "de715472-cc0f-4092-8b44-96e9c51efb91" 00:29:22.326 ], 00:29:22.326 "assigned_rate_limits": { 00:29:22.326 "r_mbytes_per_sec": 0, 00:29:22.326 "rw_ios_per_sec": 0, 00:29:22.326 "rw_mbytes_per_sec": 0, 00:29:22.326 "w_mbytes_per_sec": 0 00:29:22.326 }, 00:29:22.326 "block_size": 512, 00:29:22.326 "claim_type": "exclusive_write", 00:29:22.326 "claimed": true, 00:29:22.326 "driver_specific": {}, 00:29:22.326 "memory_domains": [ 00:29:22.326 { 00:29:22.326 "dma_device_id": "system", 00:29:22.326 "dma_device_type": 1 00:29:22.326 }, 00:29:22.326 { 00:29:22.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:22.326 
"dma_device_type": 2 00:29:22.326 } 00:29:22.326 ], 00:29:22.326 "name": "Malloc1", 00:29:22.326 "num_blocks": 1048576, 00:29:22.326 "product_name": "Malloc disk", 00:29:22.326 "supported_io_types": { 00:29:22.326 "abort": true, 00:29:22.326 "compare": false, 00:29:22.326 "compare_and_write": false, 00:29:22.326 "copy": true, 00:29:22.326 "flush": true, 00:29:22.326 "get_zone_info": false, 00:29:22.326 "nvme_admin": false, 00:29:22.326 "nvme_io": false, 00:29:22.326 "nvme_io_md": false, 00:29:22.326 "nvme_iov_md": false, 00:29:22.326 "read": true, 00:29:22.326 "reset": true, 00:29:22.326 "seek_data": false, 00:29:22.326 "seek_hole": false, 00:29:22.326 "unmap": true, 00:29:22.326 "write": true, 00:29:22.326 "write_zeroes": true, 00:29:22.326 "zcopy": true, 00:29:22.326 "zone_append": false, 00:29:22.326 "zone_management": false 00:29:22.326 }, 00:29:22.326 "uuid": "de715472-cc0f-4092-8b44-96e9c51efb91", 00:29:22.326 "zoned": false 00:29:22.326 } 00:29:22.326 ]' 00:29:22.326 05:40:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:29:22.326 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:29:22.326 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:29:22.326 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:29:22.326 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:29:22.326 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:29:22.326 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:29:22.326 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:29:22.586 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:29:22.586 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:29:22.586 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:22.586 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:29:22.586 05:40:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:29:24.490 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # 
lsblk -l -o NAME,SERIAL 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:29:24.491 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:29:24.750 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:29:25.687 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:29:25.687 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:29:25.687 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:25.687 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:25.687 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:25.687 ************************************ 00:29:25.687 START TEST filesystem_ext4 00:29:25.687 ************************************ 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
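`nvmf_filesystem_create` is the body shared by the ext4, btrfs, and xfs subtests: format the partition carved out of the exported namespace, mount it, run a small write/delete cycle, and unmount. A sketch of that flow, reconstructed from the xtrace rather than copied from target/filesystem.sh, with the force-flag handling simplified:

```bash
#!/usr/bin/env bash
# Reconstruction of the per-filesystem test body visible in the trace.
fstype=$1      # ext4 | btrfs | xfs
nvme_name=$2   # e.g. nvme0n1, found by matching the SPDK serial in lsblk
dev=/dev/${nvme_name}p1

case "$fstype" in
    ext4) mkfs.ext4 -F "$dev" ;;        # ext4 forces with -F
    *)    "mkfs.$fstype" -f "$dev" ;;   # btrfs and xfs force with -f
esac

mount "$dev" /mnt/device
touch /mnt/device/aaa   # minimal write
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# Sanity check: the exported namespace must survive the filesystem churn.
lsblk -l -o NAME | grep -q -w "$nvme_name"
```

The `kill -0` against the nvmf target PID and the `lsblk` greps that follow each subtest in the trace confirm that both the target process and the exported block device are still alive after the I/O cycle.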
00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:29:25.688 mke2fs 1.47.0 (5-Feb-2023) 00:29:25.688 Discarding device blocks: 0/522240 done 00:29:25.688 Creating filesystem with 522240 1k blocks and 130560 inodes 00:29:25.688 Filesystem UUID: c79b6a1a-9800-4561-a943-458425c88329 00:29:25.688 Superblock backups stored on blocks: 00:29:25.688 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:29:25.688 00:29:25.688 Allocating group tables: 0/64 done 00:29:25.688 Writing inode tables: 0/64 done 00:29:25.688 Creating journal (8192 blocks): done 00:29:25.688 Writing superblocks and filesystem accounting information: 0/64 done 00:29:25.688 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:29:25.688 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:29:30.976 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:29:31.235 
05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71784 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:29:31.235 00:29:31.235 real 0m5.532s 00:29:31.235 user 0m0.024s 00:29:31.235 sys 0m0.063s 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:31.235 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:29:31.235 ************************************ 00:29:31.235 END TEST filesystem_ext4 00:29:31.235 ************************************ 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:31.236 ************************************ 00:29:31.236 START TEST filesystem_btrfs 00:29:31.236 ************************************ 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:29:31.236 05:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:29:31.236 btrfs-progs v6.8.1 00:29:31.236 See https://btrfs.readthedocs.io for more information. 00:29:31.236 00:29:31.236 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:29:31.236 NOTE: several default settings have changed in version 5.15, please make sure 00:29:31.236 this does not affect your deployments: 00:29:31.236 - DUP for metadata (-m dup) 00:29:31.236 - enabled no-holes (-O no-holes) 00:29:31.236 - enabled free-space-tree (-R free-space-tree) 00:29:31.236 00:29:31.236 Label: (null) 00:29:31.236 UUID: fc86269f-3eab-4a89-8fd9-c3e1d16f2372 00:29:31.236 Node size: 16384 00:29:31.236 Sector size: 4096 (CPU page size: 4096) 00:29:31.236 Filesystem size: 510.00MiB 00:29:31.236 Block group profiles: 00:29:31.236 Data: single 8.00MiB 00:29:31.236 Metadata: DUP 32.00MiB 00:29:31.236 System: DUP 8.00MiB 00:29:31.236 SSD detected: yes 00:29:31.236 Zoned device: no 00:29:31.236 Features: extref, skinny-metadata, no-holes, free-space-tree 00:29:31.236 Checksum: crc32c 00:29:31.236 Number of devices: 1 00:29:31.236 Devices: 00:29:31.236 ID SIZE PATH 00:29:31.236 1 510.00MiB /dev/nvme0n1p1 00:29:31.236 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:29:31.236 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71784 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:29:31.495 
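
The mkfs.btrfs banner above calls out defaults that changed in btrfs-progs 5.15; spelled out explicitly, the equivalent invocation (flags taken straight from the banner) would be:

    # Defaults reported by mkfs.btrfs 6.8.1 above, written out explicitly:
    #   -m dup              DUP metadata profile
    #   -O no-holes         no-holes feature enabled
    #   -R free-space-tree  free-space-tree enabled
    mkfs.btrfs -f -m dup -O no-holes -R free-space-tree /dev/nvme0n1p1
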
05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:29:31.495 00:29:31.495 real 0m0.212s 00:29:31.495 user 0m0.028s 00:29:31.495 sys 0m0.063s 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:29:31.495 ************************************ 00:29:31.495 END TEST filesystem_btrfs 00:29:31.495 ************************************ 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:31.495 ************************************ 00:29:31.495 START TEST filesystem_xfs 00:29:31.495 ************************************ 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:29:31.495 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:29:31.496 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:29:31.496 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:29:31.496 05:40:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:29:31.755 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:29:31.755 = sectsz=512 attr=2, projid32bit=1 00:29:31.755 = crc=1 finobt=1, sparse=1, rmapbt=0 00:29:31.755 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:29:31.755 data 
= bsize=4096 blocks=130560, imaxpct=25 00:29:31.755 = sunit=0 swidth=0 blks 00:29:31.755 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:29:31.755 log =internal log bsize=4096 blocks=16384, version=2 00:29:31.755 = sectsz=512 sunit=0 blks, lazy-count=1 00:29:31.755 realtime =none extsz=4096 blocks=0, rtextents=0 00:29:32.323 Discarding blocks...Done. 00:29:32.323 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:29:32.323 05:40:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71784 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:29:34.858 00:29:34.858 real 0m3.183s 00:29:34.858 user 0m0.020s 00:29:34.858 sys 0m0.061s 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:29:34.858 ************************************ 00:29:34.858 END TEST filesystem_xfs 00:29:34.858 ************************************ 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:34.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:34.858 05:40:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:34.858 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71784 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 71784 ']' 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 71784 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71784 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:34.859 killing process with pid 71784 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71784' 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 71784 00:29:34.859 05:40:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
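
waitforserial_disconnect polls lsblk until the namespace carrying the given serial drops out of the block device list. A sketch matching the @1221-@1233 trace lines above (the polling limit and sleep are assumptions):

    waitforserial_disconnect() {
        local serial=$1
        local i=0
        # keep polling while a device with this serial is still present (@1222)
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # assumed give-up threshold
            sleep 1
        done
        # confirm with the flat list form before reporting success (@1229)
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
        return 0
    }
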
common/autotest_common.sh@976 -- # wait 71784 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:29:35.425 00:29:35.425 real 0m13.868s 00:29:35.425 user 0m52.327s 00:29:35.425 sys 0m2.499s 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:35.425 ************************************ 00:29:35.425 END TEST nvmf_filesystem_no_in_capsule 00:29:35.425 ************************************ 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:29:35.425 ************************************ 00:29:35.425 START TEST nvmf_filesystem_in_capsule 00:29:35.425 ************************************ 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=72138 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 72138 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 72138 ']' 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:35.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
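
nvmfappstart launches nvmf_tgt inside the test network namespace and waitforlisten then blocks until the target's RPC socket answers. A sketch of waitforlisten consistent with the @833-@866 trace lines; everything between the echo and the final (( i == 0 )) guard is an assumption:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}    # default seen at @837
        local max_retries=100 i                    # @838
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2> /dev/null || return 1                         # target died
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                   # retries exhausted (@862)
        return 0
    }
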
00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:35.425 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:35.683 [2024-11-20 05:40:55.371124] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:35.683 [2024-11-20 05:40:55.371206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.683 [2024-11-20 05:40:55.513487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:35.683 [2024-11-20 05:40:55.595207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.683 [2024-11-20 05:40:55.595263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.683 [2024-11-20 05:40:55.595274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.683 [2024-11-20 05:40:55.595283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.683 [2024-11-20 05:40:55.595290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.683 [2024-11-20 05:40:55.596670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.683 [2024-11-20 05:40:55.596801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.683 [2024-11-20 05:40:55.596970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.683 [2024-11-20 05:40:55.597031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:36.617 [2024-11-20 05:40:56.368264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.617 05:40:56 
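
The nvmf_create_transport call above (filesystem.sh@52) is where this in_capsule variant actually differs from the no_in_capsule run: -c sets the in-capsule data size to the 4096 handed to nvmf_filesystem_part (presumably 0 in the earlier run), so writes up to 4 KiB ride inside the NVMe/TCP command capsule instead of being fetched separately. rpc_cmd is the test wrapper around scripts/rpc.py, so the equivalent direct call is:

    # -u 8192 is the I/O unit size, -c 4096 the in-capsule data size;
    # -o is passed through from the trace as-is.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
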
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.617 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:36.875 Malloc1 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:36.875 [2024-11-20 05:40:56.608519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:29:36.875 05:40:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:29:36.875 { 00:29:36.875 "aliases": [ 00:29:36.875 "6bee5f67-b95d-4694-af9f-afd4ee9ea32c" 00:29:36.875 ], 00:29:36.875 "assigned_rate_limits": { 00:29:36.875 "r_mbytes_per_sec": 0, 00:29:36.875 "rw_ios_per_sec": 0, 00:29:36.875 "rw_mbytes_per_sec": 0, 00:29:36.875 "w_mbytes_per_sec": 0 00:29:36.875 }, 00:29:36.875 "block_size": 512, 00:29:36.875 "claim_type": "exclusive_write", 00:29:36.875 "claimed": true, 00:29:36.875 "driver_specific": {}, 00:29:36.875 "memory_domains": [ 00:29:36.875 { 00:29:36.875 "dma_device_id": "system", 00:29:36.875 "dma_device_type": 1 00:29:36.875 }, 00:29:36.875 { 00:29:36.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.875 "dma_device_type": 2 00:29:36.875 } 00:29:36.875 ], 00:29:36.875 "name": "Malloc1", 00:29:36.875 "num_blocks": 1048576, 00:29:36.875 "product_name": "Malloc disk", 00:29:36.875 "supported_io_types": { 00:29:36.875 "abort": true, 00:29:36.875 "compare": false, 00:29:36.875 "compare_and_write": false, 00:29:36.875 "copy": true, 00:29:36.875 "flush": true, 00:29:36.875 "get_zone_info": false, 00:29:36.875 "nvme_admin": false, 00:29:36.875 "nvme_io": false, 00:29:36.875 "nvme_io_md": false, 00:29:36.875 "nvme_iov_md": false, 00:29:36.875 "read": true, 00:29:36.875 "reset": true, 00:29:36.875 "seek_data": false, 00:29:36.875 "seek_hole": false, 00:29:36.875 "unmap": true, 00:29:36.875 "write": true, 00:29:36.875 "write_zeroes": true, 00:29:36.875 "zcopy": true, 00:29:36.875 "zone_append": false, 00:29:36.875 "zone_management": false 00:29:36.875 }, 00:29:36.875 "uuid": "6bee5f67-b95d-4694-af9f-afd4ee9ea32c", 00:29:36.875 "zoned": false 00:29:36.875 } 00:29:36.875 ]' 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:29:36.875 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:29:36.876 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:29:36.876 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:29:36.876 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:29:36.876 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:29:36.876 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:29:37.133 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:29:37.133 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:29:37.133 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:29:37.133 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:29:37.133 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:29:39.033 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:29:39.292 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:29:39.292 05:40:59 
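
Three steps above are worth unpacking: get_bdev_size derives the expected size from the bdev JSON, the host then attaches over NVMe/TCP, and the namespace gets a single GPT partition. Condensed from the trace, with the values exactly as logged:

    # get_bdev_size: block_size x num_blocks from bdev_get_bdevs, in MiB.
    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1048576
    echo $(( bs * nb / 1024 / 1024 ))               # 512 MiB = 536870912 bytes

    # Host side: attach the subsystem, then carve one partition out of it.
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 \
        --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%

The partprobe and one-second sleep on the next trace lines give the kernel time to surface /dev/nvme0n1p1 before mkfs runs.
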
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:29:39.292 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:40.308 ************************************ 00:29:40.308 START TEST filesystem_in_capsule_ext4 00:29:40.308 ************************************ 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:29:40.308 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:29:40.308 mke2fs 1.47.0 (5-Feb-2023) 00:29:40.308 Discarding device blocks: 0/522240 done 00:29:40.308 Creating filesystem with 522240 1k blocks and 130560 inodes 00:29:40.308 Filesystem UUID: 42f22299-8a54-4cd8-9ba9-0a9f6cecb579 00:29:40.308 Superblock backups stored on blocks: 00:29:40.308 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:29:40.308 00:29:40.308 Allocating group tables: 0/64 done 00:29:40.308 Writing inode tables: 
0/64 done 00:29:40.565 Creating journal (8192 blocks): done 00:29:40.565 Writing superblocks and filesystem accounting information: 0/64 done 00:29:40.565 00:29:40.565 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:29:40.565 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 72138 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:29:45.828 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:29:46.087 ************************************ 00:29:46.087 END TEST filesystem_in_capsule_ext4 00:29:46.087 ************************************ 00:29:46.087 00:29:46.087 real 0m5.621s 00:29:46.087 user 0m0.029s 00:29:46.087 sys 0m0.072s 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:46.087 
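
The START/END banners and the real/user/sys summaries throughout this log come from the run_test wrapper in common/autotest_common.sh. A rough sketch inferred from those banners; beyond the argument guard visible at @1103, the body is an assumption:

    run_test() {
        local test_name=$1
        shift
        if [ "$#" -le 1 ]; then        # argument guard seen at @1103
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        echo "************************************"
        echo "START TEST $test_name"
        time "$@"                      # emits the real/user/sys lines
        local rc=$?
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
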
************************************ 00:29:46.087 START TEST filesystem_in_capsule_btrfs 00:29:46.087 ************************************ 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:29:46.087 05:41:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:29:46.345 btrfs-progs v6.8.1 00:29:46.345 See https://btrfs.readthedocs.io for more information. 00:29:46.345 00:29:46.345 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:29:46.345 NOTE: several default settings have changed in version 5.15, please make sure 00:29:46.345 this does not affect your deployments: 00:29:46.345 - DUP for metadata (-m dup) 00:29:46.345 - enabled no-holes (-O no-holes) 00:29:46.345 - enabled free-space-tree (-R free-space-tree) 00:29:46.345 00:29:46.345 Label: (null) 00:29:46.345 UUID: bc80fa8f-9010-4847-8ed9-d0aab6e8fdbb 00:29:46.345 Node size: 16384 00:29:46.345 Sector size: 4096 (CPU page size: 4096) 00:29:46.345 Filesystem size: 510.00MiB 00:29:46.345 Block group profiles: 00:29:46.345 Data: single 8.00MiB 00:29:46.345 Metadata: DUP 32.00MiB 00:29:46.345 System: DUP 8.00MiB 00:29:46.345 SSD detected: yes 00:29:46.345 Zoned device: no 00:29:46.345 Features: extref, skinny-metadata, no-holes, free-space-tree 00:29:46.345 Checksum: crc32c 00:29:46.345 Number of devices: 1 00:29:46.345 Devices: 00:29:46.345 ID SIZE PATH 00:29:46.345 1 510.00MiB /dev/nvme0n1p1 00:29:46.345 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72138 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:29:46.345 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:29:46.345 ************************************ 00:29:46.346 END TEST filesystem_in_capsule_btrfs 00:29:46.346 ************************************ 00:29:46.346 00:29:46.346 real 0m0.338s 00:29:46.346 user 0m0.030s 00:29:46.346 sys 0m0.070s 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:46.346 ************************************ 00:29:46.346 START TEST filesystem_in_capsule_xfs 00:29:46.346 ************************************ 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:29:46.346 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:29:46.604 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:29:46.604 = sectsz=512 attr=2, projid32bit=1 00:29:46.604 = crc=1 finobt=1, sparse=1, rmapbt=0 00:29:46.604 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:29:46.604 data = bsize=4096 blocks=130560, imaxpct=25 00:29:46.604 = sunit=0 swidth=0 blks 00:29:46.604 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:29:46.604 log =internal log bsize=4096 blocks=16384, version=2 00:29:46.604 = sectsz=512 sunit=0 blks, lazy-count=1 00:29:46.604 realtime =none extsz=4096 blocks=0, rtextents=0 00:29:47.169 Discarding blocks...Done. 
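
The mkfs.xfs geometry printed above accounts exactly for the partition: 130560 data blocks of 4096 bytes is 534,773,760 bytes, i.e. 510 MiB, which is the 512 MiB malloc bdev minus roughly 2 MiB lost to GPT metadata and parted's alignment. A quick check:

    echo $(( 130560 * 4096 ))            # 534773760 bytes
    echo $(( 130560 * 4096 / 1048576 ))  # 510 (MiB)
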
00:29:47.169 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:29:47.169 05:41:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72138 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:29:49.077 ************************************ 00:29:49.077 END TEST filesystem_in_capsule_xfs 00:29:49.077 ************************************ 00:29:49.077 00:29:49.077 real 0m2.645s 00:29:49.077 user 0m0.030s 00:29:49.077 sys 0m0.060s 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:29:49.077 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:49.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:49.078 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:49.078 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:29:49.372 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:29:49.372 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:49.372 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:29:49.372 05:41:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72138 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 72138 ']' 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 72138 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72138 00:29:49.372 killing process with pid 72138 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72138' 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 72138 00:29:49.372 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 72138 00:29:49.633 ************************************ 00:29:49.633 END TEST nvmf_filesystem_in_capsule 00:29:49.633 ************************************ 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
00:29:49.633 00:29:49.633 real 0m14.103s 00:29:49.633 user 0m53.630s 00:29:49.633 sys 0m2.479s 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.633 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.633 rmmod nvme_tcp 00:29:49.633 rmmod nvme_fabrics 00:29:49.633 rmmod nvme_keyring 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:49.891 05:41:09 
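
The nvmftestfini teardown running through here (nvmf/common.sh@233-@245) condenses to unhooking and deleting the veth/bridge topology the test built, then removing the interfaces inside the target's network namespace; the loop below is a condensation, the individual commands are verbatim from the surrounding trace lines:

    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
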
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:49.891 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:29:50.149 00:29:50.149 real 0m29.284s 00:29:50.149 user 1m46.353s 00:29:50.149 sys 0m5.560s 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:29:50.149 ************************************ 00:29:50.149 END TEST nvmf_filesystem 00:29:50.149 ************************************ 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:50.149 ************************************ 00:29:50.149 START TEST nvmf_target_discovery 00:29:50.149 ************************************ 00:29:50.149 05:41:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:29:50.149 * Looking for test storage... 
00:29:50.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:50.149 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:50.149 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:29:50.149 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.408 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:50.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.408 --rc genhtml_branch_coverage=1 00:29:50.408 --rc genhtml_function_coverage=1 00:29:50.408 --rc genhtml_legend=1 00:29:50.409 --rc geninfo_all_blocks=1 00:29:50.409 --rc geninfo_unexecuted_blocks=1 00:29:50.409 00:29:50.409 ' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.409 --rc genhtml_branch_coverage=1 00:29:50.409 --rc genhtml_function_coverage=1 00:29:50.409 --rc genhtml_legend=1 00:29:50.409 --rc geninfo_all_blocks=1 00:29:50.409 --rc geninfo_unexecuted_blocks=1 00:29:50.409 00:29:50.409 ' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.409 --rc genhtml_branch_coverage=1 00:29:50.409 --rc genhtml_function_coverage=1 00:29:50.409 --rc genhtml_legend=1 00:29:50.409 --rc geninfo_all_blocks=1 00:29:50.409 --rc geninfo_unexecuted_blocks=1 00:29:50.409 00:29:50.409 ' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:50.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.409 --rc genhtml_branch_coverage=1 00:29:50.409 --rc genhtml_function_coverage=1 00:29:50.409 --rc genhtml_legend=1 00:29:50.409 --rc geninfo_all_blocks=1 00:29:50.409 --rc geninfo_unexecuted_blocks=1 00:29:50.409 00:29:50.409 ' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.409 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.409 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:50.410 Cannot find device "nvmf_init_br" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:50.410 Cannot find device "nvmf_init_br2" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:50.410 Cannot find device "nvmf_tgt_br" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:50.410 Cannot find device "nvmf_tgt_br2" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:50.410 Cannot find device "nvmf_init_br" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:50.410 Cannot find device "nvmf_init_br2" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:50.410 Cannot find device "nvmf_tgt_br" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:50.410 Cannot find device "nvmf_tgt_br2" 00:29:50.410 05:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:50.410 Cannot find device "nvmf_br" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:50.410 Cannot find device "nvmf_init_if" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:50.410 Cannot find device "nvmf_init_if2" 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:50.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:50.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:50.410 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:50.668 05:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:50.668 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:50.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:50.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:29:50.669 00:29:50.669 --- 10.0.0.3 ping statistics --- 00:29:50.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.669 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:50.669 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:50.669 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:29:50.669 00:29:50.669 --- 10.0.0.4 ping statistics --- 00:29:50.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.669 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:50.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:29:50.669 00:29:50.669 --- 10.0.0.1 ping statistics --- 00:29:50.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.669 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:50.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:29:50.669 00:29:50.669 --- 10.0.0.2 ping statistics --- 00:29:50.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.669 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.669 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72724 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72724 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 72724 ']' 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:50.927 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.927 [2024-11-20 05:41:10.651131] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:50.927 [2024-11-20 05:41:10.651735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.927 [2024-11-20 05:41:10.797539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:51.186 [2024-11-20 05:41:10.849062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.186 [2024-11-20 05:41:10.849108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.186 [2024-11-20 05:41:10.849118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.186 [2024-11-20 05:41:10.849128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.186 [2024-11-20 05:41:10.849136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:51.186 [2024-11-20 05:41:10.850027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.186 [2024-11-20 05:41:10.850141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.186 [2024-11-20 05:41:10.850295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.186 [2024-11-20 05:41:10.850300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.186 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 [2024-11-20 05:41:11.008779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 Null1 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 05:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 [2024-11-20 05:41:11.052987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 Null2 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:29:51.186 Null3 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.186 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.445 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.445 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:29:51.445 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.445 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.445 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.445 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 Null4 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.446 05:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 4420 00:29:51.446 00:29:51.446 Discovery Log Number of Records 6, Generation counter 6 00:29:51.446 =====Discovery Log Entry 0====== 00:29:51.446 trtype: tcp 00:29:51.446 adrfam: ipv4 00:29:51.446 subtype: current discovery subsystem 00:29:51.446 treq: not required 00:29:51.446 portid: 0 00:29:51.446 trsvcid: 4420 00:29:51.446 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:51.446 traddr: 10.0.0.3 00:29:51.446 eflags: explicit discovery connections, duplicate discovery information 00:29:51.446 sectype: none 00:29:51.446 =====Discovery Log Entry 1====== 00:29:51.446 trtype: tcp 00:29:51.446 adrfam: ipv4 00:29:51.446 subtype: nvme subsystem 00:29:51.446 treq: not required 00:29:51.446 portid: 0 00:29:51.446 trsvcid: 4420 00:29:51.446 subnqn: nqn.2016-06.io.spdk:cnode1 00:29:51.446 traddr: 10.0.0.3 00:29:51.446 eflags: none 00:29:51.446 sectype: none 00:29:51.446 =====Discovery Log Entry 2====== 00:29:51.446 trtype: tcp 00:29:51.446 adrfam: ipv4 00:29:51.446 subtype: nvme subsystem 00:29:51.446 treq: not required 00:29:51.446 portid: 0 00:29:51.446 trsvcid: 4420 00:29:51.446 subnqn: nqn.2016-06.io.spdk:cnode2 00:29:51.446 traddr: 10.0.0.3 00:29:51.446 eflags: none 00:29:51.446 sectype: none 00:29:51.446 =====Discovery Log Entry 3====== 00:29:51.446 trtype: tcp 00:29:51.446 adrfam: ipv4 00:29:51.446 subtype: nvme subsystem 00:29:51.446 treq: not required 00:29:51.446 portid: 0 00:29:51.446 trsvcid: 4420 00:29:51.446 subnqn: nqn.2016-06.io.spdk:cnode3 00:29:51.446 traddr: 10.0.0.3 00:29:51.446 eflags: none 00:29:51.446 sectype: none 00:29:51.446 =====Discovery Log Entry 4====== 00:29:51.446 trtype: tcp 00:29:51.446 adrfam: ipv4 00:29:51.446 subtype: nvme subsystem 
00:29:51.446 treq: not required 00:29:51.446 portid: 0 00:29:51.446 trsvcid: 4420 00:29:51.446 subnqn: nqn.2016-06.io.spdk:cnode4 00:29:51.446 traddr: 10.0.0.3 00:29:51.446 eflags: none 00:29:51.446 sectype: none 00:29:51.446 =====Discovery Log Entry 5====== 00:29:51.446 trtype: tcp 00:29:51.446 adrfam: ipv4 00:29:51.446 subtype: discovery subsystem referral 00:29:51.446 treq: not required 00:29:51.446 portid: 0 00:29:51.446 trsvcid: 4430 00:29:51.446 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:51.446 traddr: 10.0.0.3 00:29:51.446 eflags: none 00:29:51.446 sectype: none 00:29:51.446 Perform nvmf subsystem discovery via RPC 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.446 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.446 [ 00:29:51.446 { 00:29:51.446 "allow_any_host": true, 00:29:51.446 "hosts": [], 00:29:51.446 "listen_addresses": [ 00:29:51.446 { 00:29:51.446 "adrfam": "IPv4", 00:29:51.446 "traddr": "10.0.0.3", 00:29:51.446 "trsvcid": "4420", 00:29:51.446 "trtype": "TCP" 00:29:51.446 } 00:29:51.446 ], 00:29:51.446 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:51.446 "subtype": "Discovery" 00:29:51.446 }, 00:29:51.446 { 00:29:51.446 "allow_any_host": true, 00:29:51.446 "hosts": [], 00:29:51.446 "listen_addresses": [ 00:29:51.446 { 00:29:51.446 "adrfam": "IPv4", 00:29:51.446 "traddr": "10.0.0.3", 00:29:51.446 "trsvcid": "4420", 00:29:51.446 "trtype": "TCP" 00:29:51.446 } 00:29:51.446 ], 00:29:51.446 "max_cntlid": 65519, 00:29:51.446 "max_namespaces": 32, 00:29:51.446 "min_cntlid": 1, 00:29:51.446 "model_number": "SPDK bdev Controller", 00:29:51.446 "namespaces": [ 00:29:51.446 { 00:29:51.446 "bdev_name": "Null1", 00:29:51.446 "name": "Null1", 00:29:51.446 "nguid": "611917D70A934589A6E5C79A1FD4B4C4", 00:29:51.446 "nsid": 1, 00:29:51.446 "uuid": "611917d7-0a93-4589-a6e5-c79a1fd4b4c4" 00:29:51.446 } 00:29:51.446 ], 00:29:51.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.446 "serial_number": "SPDK00000000000001", 00:29:51.446 "subtype": "NVMe" 00:29:51.446 }, 00:29:51.446 { 00:29:51.446 "allow_any_host": true, 00:29:51.446 "hosts": [], 00:29:51.446 "listen_addresses": [ 00:29:51.446 { 00:29:51.446 "adrfam": "IPv4", 00:29:51.446 "traddr": "10.0.0.3", 00:29:51.446 "trsvcid": "4420", 00:29:51.446 "trtype": "TCP" 00:29:51.446 } 00:29:51.446 ], 00:29:51.446 "max_cntlid": 65519, 00:29:51.446 "max_namespaces": 32, 00:29:51.446 "min_cntlid": 1, 00:29:51.446 "model_number": "SPDK bdev Controller", 00:29:51.446 "namespaces": [ 00:29:51.446 { 00:29:51.446 "bdev_name": "Null2", 00:29:51.446 "name": "Null2", 00:29:51.446 "nguid": "9ED9F3BFBB984A2C9E416BD1ADCBC5CB", 00:29:51.446 "nsid": 1, 00:29:51.446 "uuid": "9ed9f3bf-bb98-4a2c-9e41-6bd1adcbc5cb" 00:29:51.446 } 00:29:51.447 ], 00:29:51.447 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:51.447 "serial_number": "SPDK00000000000002", 00:29:51.447 "subtype": "NVMe" 00:29:51.447 }, 00:29:51.447 { 00:29:51.447 "allow_any_host": true, 00:29:51.447 "hosts": [], 00:29:51.447 "listen_addresses": [ 00:29:51.447 { 00:29:51.447 "adrfam": "IPv4", 00:29:51.447 "traddr": "10.0.0.3", 00:29:51.447 "trsvcid": "4420", 00:29:51.447 
"trtype": "TCP" 00:29:51.447 } 00:29:51.447 ], 00:29:51.447 "max_cntlid": 65519, 00:29:51.447 "max_namespaces": 32, 00:29:51.447 "min_cntlid": 1, 00:29:51.447 "model_number": "SPDK bdev Controller", 00:29:51.447 "namespaces": [ 00:29:51.447 { 00:29:51.447 "bdev_name": "Null3", 00:29:51.447 "name": "Null3", 00:29:51.447 "nguid": "ECD022D48C024795A5A2E8DE7EF2C090", 00:29:51.447 "nsid": 1, 00:29:51.447 "uuid": "ecd022d4-8c02-4795-a5a2-e8de7ef2c090" 00:29:51.447 } 00:29:51.447 ], 00:29:51.447 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:29:51.447 "serial_number": "SPDK00000000000003", 00:29:51.447 "subtype": "NVMe" 00:29:51.447 }, 00:29:51.447 { 00:29:51.447 "allow_any_host": true, 00:29:51.447 "hosts": [], 00:29:51.447 "listen_addresses": [ 00:29:51.447 { 00:29:51.447 "adrfam": "IPv4", 00:29:51.447 "traddr": "10.0.0.3", 00:29:51.447 "trsvcid": "4420", 00:29:51.447 "trtype": "TCP" 00:29:51.447 } 00:29:51.447 ], 00:29:51.447 "max_cntlid": 65519, 00:29:51.447 "max_namespaces": 32, 00:29:51.447 "min_cntlid": 1, 00:29:51.447 "model_number": "SPDK bdev Controller", 00:29:51.447 "namespaces": [ 00:29:51.447 { 00:29:51.447 "bdev_name": "Null4", 00:29:51.447 "name": "Null4", 00:29:51.447 "nguid": "82B65A3C34144B398386DD093C035ED1", 00:29:51.447 "nsid": 1, 00:29:51.447 "uuid": "82b65a3c-3414-4b39-8386-dd093c035ed1" 00:29:51.447 } 00:29:51.447 ], 00:29:51.447 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:29:51.447 "serial_number": "SPDK00000000000004", 00:29:51.447 "subtype": "NVMe" 00:29:51.447 } 00:29:51.447 ] 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.447 05:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.447 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.707 05:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.707 rmmod nvme_tcp 00:29:51.707 rmmod nvme_fabrics 00:29:51.707 rmmod nvme_keyring 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72724 ']' 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72724 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 72724 ']' 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 72724 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72724 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72724' 00:29:51.707 killing process with pid 72724 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 72724 00:29:51.707 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 72724 00:29:51.966 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:51.966 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:51.967 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:52.226 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:29:52.226 00:29:52.226 real 0m2.159s 00:29:52.226 user 0m4.213s 00:29:52.226 sys 0m0.738s 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- 
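nvmftestfini, traced above, is the mirror image of the setup: unload the initiator-side kernel modules (the rmmod lines), kill the target process (pid 72724), strip the SPDK_NVMF iptables rules, and dismantle the virtual network. A sketch of the nvmf_veth_fini portion, assuming the interface names nvmf/common.sh uses:

    # Detach the host-side veth peers from the bridge and bring them down...
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    # ...then delete the bridge, the initiator-side interfaces, the
    # interfaces living in the target namespace, and (roughly what
    # _remove_spdk_ns does) the namespace itself.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk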
# xtrace_disable 00:29:52.226 ************************************ 00:29:52.226 END TEST nvmf_target_discovery 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.226 ************************************ 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:52.226 ************************************ 00:29:52.226 START TEST nvmf_referrals 00:29:52.226 ************************************ 00:29:52.226 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:29:52.486 * Looking for test storage... 00:29:52.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.486 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.487 --rc genhtml_branch_coverage=1 00:29:52.487 --rc genhtml_function_coverage=1 00:29:52.487 --rc genhtml_legend=1 00:29:52.487 --rc geninfo_all_blocks=1 00:29:52.487 --rc geninfo_unexecuted_blocks=1 00:29:52.487 00:29:52.487 ' 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.487 --rc genhtml_branch_coverage=1 00:29:52.487 --rc genhtml_function_coverage=1 00:29:52.487 --rc genhtml_legend=1 00:29:52.487 --rc geninfo_all_blocks=1 00:29:52.487 --rc geninfo_unexecuted_blocks=1 00:29:52.487 00:29:52.487 ' 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.487 --rc genhtml_branch_coverage=1 00:29:52.487 --rc genhtml_function_coverage=1 00:29:52.487 --rc genhtml_legend=1 00:29:52.487 --rc geninfo_all_blocks=1 00:29:52.487 --rc geninfo_unexecuted_blocks=1 00:29:52.487 00:29:52.487 ' 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.487 --rc genhtml_branch_coverage=1 00:29:52.487 --rc genhtml_function_coverage=1 00:29:52.487 --rc genhtml_legend=1 00:29:52.487 --rc geninfo_all_blocks=1 00:29:52.487 --rc geninfo_unexecuted_blocks=1 00:29:52.487 00:29:52.487 ' 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
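The cmp_versions walk traced above is how the harness decides whether the installed lcov (1.15) predates 2.x before picking coverage flags: split both versions on '.', '-' and ':', then compare field by field. A minimal re-implementation of the same logic, using the scripts/common.sh names as they appear in the trace:

    # lt A B: succeed (return 0) iff version A sorts strictly before B.
    lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A newer: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A older: less-than
        done
        return 1   # equal: not strictly less-than
    }
    lt 1.15 2 && echo "old lcov detected"

Here lt 1.15 2 succeeds on the very first field (1 < 2), which is why the trace goes on to set the --rc lcov_branch_coverage=1/--rc lcov_function_coverage=1 spelling of the options.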
00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.487 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.487 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:29:52.488 05:41:12 
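One harness wart surfaces above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while assembling the app arguments, and test(1)'s -eq demands an integer on both sides, hence the "integer expression expected" complaint. It is cosmetic here, since the failed test simply skips that branch, but the failure mode is easy to reproduce (maybe_flag below is just an illustrative stand-in):

    # -eq on an empty expansion reproduces the error seen in the trace;
    # defaulting the expansion is the usual fix.
    unset maybe_flag
    [ "$maybe_flag" -eq 1 ]        # bash: [: : integer expression expected (status 2)
    [ "${maybe_flag:-0}" -eq 1 ]   # false, but no error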
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:52.488 Cannot find device "nvmf_init_br" 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:52.488 Cannot find device "nvmf_init_br2" 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:29:52.488 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:52.747 Cannot find device "nvmf_tgt_br" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:52.747 Cannot find device "nvmf_tgt_br2" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:52.747 Cannot find device "nvmf_init_br" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:52.747 Cannot find device "nvmf_init_br2" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:52.747 Cannot find device "nvmf_tgt_br" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:52.747 Cannot find device "nvmf_tgt_br2" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:52.747 Cannot find device "nvmf_br" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:52.747 Cannot find device "nvmf_init_if" 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:52.747 Cannot find device "nvmf_init_if2" 00:29:52.747 05:41:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:52.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:52.747 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:52.748 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:52.748 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
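What nvmf_veth_init has assembled at this point (and bridges together just below) is a self-contained test network: initiator addresses 10.0.0.1 and 10.0.0.2 on the host side, target addresses 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace. A minimal sketch of the same topology, trimmed to one veth pair per side:

    # One initiator pair and one target pair, joined by the nvmf_br bridge;
    # the harness builds two of each (the *2 names in the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3   # the reachability check the harness runs next

The iptables ACCEPT rules inserted right after (tagged SPDK_NVMF in their comments) are what the iptables-save | grep -v SPDK_NVMF | iptables-restore step in nvmftestfini later strips back out.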
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:53.007 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:53.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:53.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:29:53.008 00:29:53.008 --- 10.0.0.3 ping statistics --- 00:29:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.008 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:53.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:53.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:29:53.008 00:29:53.008 --- 10.0.0.4 ping statistics --- 00:29:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.008 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:53.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:29:53.008 00:29:53.008 --- 10.0.0.1 ping statistics --- 00:29:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.008 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:53.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:53.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:29:53.008 00:29:53.008 --- 10.0.0.2 ping statistics --- 00:29:53.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.008 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72992 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72992 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 72992 ']' 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:53.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:53.008 05:41:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:53.008 [2024-11-20 05:41:12.904117] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
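With all four pings answered, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket responds, which is the gap between nvmfpid=72992 and the SPDK banner above. Roughly, with rpc.py's stock rpc_get_methods used as the readiness probe (the harness's actual polling helper may differ):

    # Start nvmf_tgt in the target namespace, remember its pid, and poll
    # the default /var/tmp/spdk.sock until RPC answers.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done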
00:29:53.008 [2024-11-20 05:41:12.904231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.267 [2024-11-20 05:41:13.065135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.267 [2024-11-20 05:41:13.159813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.267 [2024-11-20 05:41:13.159894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.267 [2024-11-20 05:41:13.159911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.267 [2024-11-20 05:41:13.159925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.267 [2024-11-20 05:41:13.159936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.267 [2024-11-20 05:41:13.161514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.267 [2024-11-20 05:41:13.161618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.267 [2024-11-20 05:41:13.161850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.267 [2024-11-20 05:41:13.161856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.204 [2024-11-20 05:41:13.948706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.204 [2024-11-20 05:41:13.964972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.204 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.205 05:41:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
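At this point referrals.sh has a transport, a discovery listener on 10.0.0.3:8009, and three referrals registered; everything that follows reads them back two ways and asserts the lists match. A condensed sketch, assuming rpc.py and nvme stand in for rpc_cmd and the harness's discover invocation (the --hostnqn/--hostid flags are dropped for brevity):

    # Setup mirrored from the trace: TCP transport, discovery listener,
    # one referral per loopback address.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Read-back path 1: ask the target over RPC.
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Read-back path 2: query the discovery service over the wire and drop
    # the record describing the discovery subsystem itself.
    nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

Both pipelines must print the same three addresses, which is the comparison get_referral_ips makes. The later passes (referrals.sh@60 onward) repeat the cycle with an explicit -n subsystem NQN: a referral carrying nqn.2014-08.org.nvmexpress.discovery surfaces in the discovery log as a "discovery subsystem referral" record, while one carrying nqn.2016-06.io.spdk:cnode1 surfaces as an "nvme subsystem" record, exactly what the two get_discovery_entries checks assert.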
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:29:54.205 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:29:54.464 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:29:54.723 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:29:54.982 05:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:29:54.982 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:29:55.241 05:41:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:29:55.241 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:29:55.241 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:29:55.241 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:29:55.241 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:29:55.241 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:29:55.241 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:29:55.241 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 
--hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:29:55.503 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:29:55.504 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.772 
05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.772 rmmod nvme_tcp 00:29:55.772 rmmod nvme_fabrics 00:29:55.772 rmmod nvme_keyring 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72992 ']' 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72992 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 72992 ']' 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 72992 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72992 00:29:55.772 killing process with pid 72992 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72992' 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 72992 00:29:55.772 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 72992 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:56.029 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:56.287 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:56.287 05:41:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:29:56.287 ************************************ 00:29:56.287 END TEST nvmf_referrals 00:29:56.287 ************************************ 00:29:56.287 00:29:56.287 real 0m3.946s 00:29:56.287 user 0m11.625s 00:29:56.287 sys 0m1.242s 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:56.287 ************************************ 00:29:56.287 START TEST nvmf_connect_disconnect 00:29:56.287 ************************************ 00:29:56.287 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:29:56.545 * Looking for test storage... 
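Before following the connect/disconnect run below, the nvmf_referrals sequence that just ended is worth condensing: every check mutates the referral list over the RPC socket and then confirms that the target (rpc_cmd nvmf_discovery_get_referrals) and the host (nvme discover) agree. A minimal standalone sketch, assuming the target from this run is still listening on 10.0.0.3:8009 and using the rpc.py path from this workspace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # remove the subsystem referral added earlier in the test
    "$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # target-side view: expect an empty referral list once everything is removed
    "$rpc" nvmf_discovery_get_referrals | jq length
    # host-side view: referral traddrs as seen in the discovery log page
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 \
        --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

Every command and jq filter above is taken from the trace itself; only the rpc variable name is invented here.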
00:29:56.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.545 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:56.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.546 --rc genhtml_branch_coverage=1 00:29:56.546 --rc genhtml_function_coverage=1 00:29:56.546 --rc genhtml_legend=1 00:29:56.546 --rc geninfo_all_blocks=1 00:29:56.546 --rc geninfo_unexecuted_blocks=1 00:29:56.546 00:29:56.546 ' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:56.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.546 --rc genhtml_branch_coverage=1 00:29:56.546 --rc genhtml_function_coverage=1 00:29:56.546 --rc genhtml_legend=1 00:29:56.546 --rc geninfo_all_blocks=1 00:29:56.546 --rc geninfo_unexecuted_blocks=1 00:29:56.546 00:29:56.546 ' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:56.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.546 --rc genhtml_branch_coverage=1 00:29:56.546 --rc genhtml_function_coverage=1 00:29:56.546 --rc genhtml_legend=1 00:29:56.546 --rc geninfo_all_blocks=1 00:29:56.546 --rc geninfo_unexecuted_blocks=1 00:29:56.546 00:29:56.546 ' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:56.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.546 --rc genhtml_branch_coverage=1 00:29:56.546 --rc genhtml_function_coverage=1 00:29:56.546 --rc genhtml_legend=1 00:29:56.546 --rc geninfo_all_blocks=1 00:29:56.546 --rc geninfo_unexecuted_blocks=1 00:29:56.546 00:29:56.546 ' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.546 05:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:56.546 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:56.546 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:56.547 Cannot find device "nvmf_init_br" 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:56.547 Cannot find device "nvmf_init_br2" 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:56.547 Cannot find device "nvmf_tgt_br" 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:56.547 Cannot find device "nvmf_tgt_br2" 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:56.547 Cannot find device "nvmf_init_br" 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:56.547 Cannot find device "nvmf_init_br2" 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:29:56.547 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:56.806 Cannot find device "nvmf_tgt_br" 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:56.806 Cannot find device "nvmf_tgt_br2" 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
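The run of "Cannot find device" errors above (and the delete steps that follow) is expected on a fresh host: nvmf_veth_init tears down any leftover topology before building a new one, and every teardown command is tolerated on failure, which is why each is paired with a "# true" in the trace. Condensed into loop form, it amounts to roughly the following sketch (not the literal common.sh code):

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true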
00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:56.806 Cannot find device "nvmf_br" 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:56.806 Cannot find device "nvmf_init_if" 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:56.806 Cannot find device "nvmf_init_if2" 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:56.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:56.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:56.806 05:41:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:56.806 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:57.066 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:29:57.066 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms
00:29:57.066
00:29:57.066 --- 10.0.0.3 ping statistics ---
00:29:57.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.066 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:29:57.066 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:29:57.066 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:29:57.066
00:29:57.066 --- 10.0.0.4 ping statistics ---
00:29:57.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.066 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:29:57.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:57.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:29:57.066
00:29:57.066 --- 10.0.0.1 ping statistics ---
00:29:57.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.066 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:29:57.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:57.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms
00:29:57.066
00:29:57.066 --- 10.0.0.2 ping statistics ---
00:29:57.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:57.066 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=73352
00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 73352 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 73352 ']' 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:57.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:57.066 05:41:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:57.066 [2024-11-20 05:41:16.912695] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:57.066 [2024-11-20 05:41:16.912802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.325 [2024-11-20 05:41:17.065785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.325 [2024-11-20 05:41:17.113924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.325 [2024-11-20 05:41:17.113973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.325 [2024-11-20 05:41:17.113983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.325 [2024-11-20 05:41:17.113992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.325 [2024-11-20 05:41:17.113999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
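The four successful pings above confirm the topology nvmf_veth_init built: two initiator veths (10.0.0.1 and 10.0.0.2) on the host side, two target veths (10.0.0.3 and 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Reconstructed from the trace, the whole setup fits in a condensed sketch (the real common.sh also installs the iptables ACCEPT rules shown earlier):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    # enslave the bridge-side peers and bring everything up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br
    done
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ping -c 1 10.0.0.3   # host to target namespace, as verified above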
00:29:57.325 [2024-11-20 05:41:17.114871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.325 [2024-11-20 05:41:17.114956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.325 [2024-11-20 05:41:17.115043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.325 [2024-11-20 05:41:17.115049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.263 [2024-11-20 05:41:17.964214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.263 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:29:58.264 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.264 05:41:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.264 05:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:58.264 [2024-11-20 05:41:18.035774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:29:58.264 05:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:30:00.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:02.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:05.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:07.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:09.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:09.666 rmmod nvme_tcp 00:30:09.666 rmmod nvme_fabrics 00:30:09.666 rmmod nvme_keyring 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 73352 ']' 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 73352 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 73352 ']' 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 73352 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
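The five "disconnected 1 controller(s)" notices above are the core of this test: connect_disconnect.sh runs num_iterations=5 cycles against the subsystem it created. Stripped of the harness, each iteration presumably amounts to the following (hostnqn/hostid are the values generated for this run; any readiness waits in the real script are omitted):

    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 \
            --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203
        # prints "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" on success
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done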
00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73352 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:09.666 killing process with pid 73352 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73352' 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 73352 00:30:09.666 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 73352 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:09.924 05:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:30:09.924 ************************************ 00:30:09.924 END TEST nvmf_connect_disconnect 00:30:09.924 00:30:09.924 real 0m13.707s 00:30:09.924 user 0m48.205s 00:30:09.924 sys 0m3.068s 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:09.924 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:09.924 ************************************ 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:10.182 ************************************ 00:30:10.182 START TEST nvmf_multitarget 00:30:10.182 ************************************ 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:30:10.182 * Looking for test storage... 
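Each START TEST / END TEST banner pair in this log, including the nvmf_multitarget one beginning here, comes from the run_test wrapper in autotest_common.sh, which also emits the real/user/sys timing lines. In outline it behaves like this simplified sketch (the real wrapper additionally validates its arguments and manages xtrace state):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }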
00:30:10.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:30:10.182 05:41:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:10.182 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:10.182 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.182 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.182 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.182 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.182 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.182 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:10.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.183 --rc genhtml_branch_coverage=1 00:30:10.183 --rc genhtml_function_coverage=1 00:30:10.183 --rc genhtml_legend=1 00:30:10.183 --rc geninfo_all_blocks=1 00:30:10.183 --rc geninfo_unexecuted_blocks=1 00:30:10.183 00:30:10.183 ' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:10.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.183 --rc genhtml_branch_coverage=1 00:30:10.183 --rc genhtml_function_coverage=1 00:30:10.183 --rc genhtml_legend=1 00:30:10.183 --rc geninfo_all_blocks=1 00:30:10.183 --rc geninfo_unexecuted_blocks=1 00:30:10.183 00:30:10.183 ' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:10.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.183 --rc genhtml_branch_coverage=1 00:30:10.183 --rc genhtml_function_coverage=1 00:30:10.183 --rc genhtml_legend=1 00:30:10.183 --rc geninfo_all_blocks=1 00:30:10.183 --rc geninfo_unexecuted_blocks=1 00:30:10.183 00:30:10.183 ' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:10.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.183 --rc genhtml_branch_coverage=1 00:30:10.183 --rc genhtml_function_coverage=1 00:30:10.183 --rc genhtml_legend=1 00:30:10.183 --rc geninfo_all_blocks=1 00:30:10.183 --rc geninfo_unexecuted_blocks=1 00:30:10.183 00:30:10.183 ' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:10.183 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.183 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:10.452 Cannot find device "nvmf_init_br" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:10.452 Cannot find device "nvmf_init_br2" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:10.452 Cannot find device "nvmf_tgt_br" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:10.452 Cannot find device "nvmf_tgt_br2" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:10.452 Cannot find device "nvmf_init_br" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:10.452 Cannot find device "nvmf_init_br2" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:10.452 Cannot find device "nvmf_tgt_br" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:10.452 Cannot find device "nvmf_tgt_br2" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:10.452 Cannot find device "nvmf_br" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:10.452 Cannot find device "nvmf_init_if" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:10.452 Cannot find device "nvmf_init_if2" 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:10.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:10.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:10.452 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:10.721 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:10.721 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:30:10.721 00:30:10.721 --- 10.0.0.3 ping statistics --- 00:30:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.721 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:10.721 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:10.721 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:30:10.721 00:30:10.721 --- 10.0.0.4 ping statistics --- 00:30:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.721 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:10.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:10.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:30:10.721 00:30:10.721 --- 10.0.0.1 ping statistics --- 00:30:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.721 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:10.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:10.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:30:10.721 00:30:10.721 --- 10.0.0.2 ping statistics --- 00:30:10.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.721 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73813 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73813 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 73813 ']' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:10.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:10.721 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:30:10.721 [2024-11-20 05:41:30.568865] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
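Before nvmf_tgt comes up (pid 73813 below), nvmf_veth_init has finished building the virtual test network that the four pings just verified. Condensed into one annotated bash sketch — the names, addresses, and rules are taken from the commands logged above, but this is a summary, not the harness's own code:

    # The target side lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Four veth pairs: two initiator-side, two target-side.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace, then address everything
    # out of 10.0.0.0/24: initiators .1/.2, targets .3/.4.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bridge all host-side peers so the two sides can reach each other,
    # and accept NVMe/TCP traffic on port 4420. (The harness also brings
    # every interface up; those ip link set ... up calls are omitted here.)
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Network namespaces isolate the network stack but not the filesystem, which is why the host-side RPC scripts can still reach the target over /var/tmp/spdk.sock even though nvmf_tgt itself is launched under ip netns exec nvmf_tgt_ns_spdk.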
00:30:10.721 [2024-11-20 05:41:30.568987] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.980 [2024-11-20 05:41:30.719549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.980 [2024-11-20 05:41:30.781058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.980 [2024-11-20 05:41:30.781110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.980 [2024-11-20 05:41:30.781120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.980 [2024-11-20 05:41:30.781130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.980 [2024-11-20 05:41:30.781138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.980 [2024-11-20 05:41:30.782102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.980 [2024-11-20 05:41:30.782236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.980 [2024-11-20 05:41:30.782241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.980 [2024-11-20 05:41:30.782179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.980 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:10.980 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:30:10.980 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:10.980 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:10.980 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:30:11.238 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.238 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:30:11.238 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:30:11.238 05:41:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:30:11.238 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:30:11.238 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:30:11.496 "nvmf_tgt_1" 00:30:11.496 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:30:11.496 "nvmf_tgt_2" 00:30:11.496 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:30:11.496 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:30:11.754 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:30:11.754 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:30:11.754 true 00:30:11.754 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:30:12.012 true 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.012 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:12.012 rmmod nvme_tcp 00:30:12.012 rmmod nvme_fabrics 00:30:12.012 rmmod nvme_keyring 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73813 ']' 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73813 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 73813 ']' 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 73813 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73813 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:12.271 killing process with pid 73813 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
73813' 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 73813 00:30:12.271 05:41:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 73813 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:12.271 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:12.529 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:30:12.530 00:30:12.530 
real 0m2.535s 00:30:12.530 user 0m6.637s 00:30:12.530 sys 0m0.767s 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:12.530 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:30:12.530 ************************************ 00:30:12.530 END TEST nvmf_multitarget 00:30:12.530 ************************************ 00:30:12.788 05:41:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:30:12.788 05:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:12.788 05:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:12.788 05:41:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:12.788 ************************************ 00:30:12.788 START TEST nvmf_rpc 00:30:12.788 ************************************ 00:30:12.788 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:30:12.789 * Looking for test storage... 00:30:12.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:12.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.789 --rc genhtml_branch_coverage=1 00:30:12.789 --rc genhtml_function_coverage=1 00:30:12.789 --rc genhtml_legend=1 00:30:12.789 --rc geninfo_all_blocks=1 00:30:12.789 --rc geninfo_unexecuted_blocks=1 00:30:12.789 00:30:12.789 ' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:12.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.789 --rc genhtml_branch_coverage=1 00:30:12.789 --rc genhtml_function_coverage=1 00:30:12.789 --rc genhtml_legend=1 00:30:12.789 --rc geninfo_all_blocks=1 00:30:12.789 --rc geninfo_unexecuted_blocks=1 00:30:12.789 00:30:12.789 ' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:12.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.789 --rc genhtml_branch_coverage=1 00:30:12.789 --rc genhtml_function_coverage=1 00:30:12.789 --rc genhtml_legend=1 00:30:12.789 --rc geninfo_all_blocks=1 00:30:12.789 --rc geninfo_unexecuted_blocks=1 00:30:12.789 00:30:12.789 ' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:12.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.789 --rc genhtml_branch_coverage=1 00:30:12.789 --rc genhtml_function_coverage=1 00:30:12.789 --rc genhtml_legend=1 00:30:12.789 --rc geninfo_all_blocks=1 00:30:12.789 --rc geninfo_unexecuted_blocks=1 00:30:12.789 00:30:12.789 ' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.789 05:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.789 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:12.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:12.790 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:13.048 Cannot find device "nvmf_init_br" 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:30:13.048 05:41:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:13.048 Cannot find device "nvmf_init_br2" 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:13.048 Cannot find device "nvmf_tgt_br" 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:13.048 Cannot find device "nvmf_tgt_br2" 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:13.048 Cannot find device "nvmf_init_br" 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:13.048 Cannot find device "nvmf_init_br2" 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:13.048 Cannot find device "nvmf_tgt_br" 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:30:13.048 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:13.048 Cannot find device "nvmf_tgt_br2" 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:13.049 Cannot find device "nvmf_br" 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:13.049 Cannot find device "nvmf_init_if" 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:13.049 Cannot find device "nvmf_init_if2" 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:13.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:13.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:13.049 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:13.307 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:13.307 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:13.307 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:13.307 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:13.307 05:41:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:13.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:13.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:30:13.307 00:30:13.307 --- 10.0.0.3 ping statistics --- 00:30:13.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.307 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:13.307 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:13.307 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:30:13.307 00:30:13.307 --- 10.0.0.4 ping statistics --- 00:30:13.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.307 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:13.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:13.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:30:13.307 00:30:13.307 --- 10.0.0.1 ping statistics --- 00:30:13.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.307 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:13.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:13.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:30:13.307 00:30:13.307 --- 10.0.0.2 ping statistics --- 00:30:13.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.307 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:13.307 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=74078 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 74078 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 74078 ']' 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:13.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:13.308 05:41:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:13.308 [2024-11-20 05:41:33.161056] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
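[editor's note] The trace above is the harness's nvmf_veth_init: it creates two initiator-side veth pairs (nvmf_init_if/nvmf_init_if2) and two target-side pairs (nvmf_tgt_if/nvmf_tgt_if2), moves the target ends into the nvmf_tgt_ns_spdk namespace, addresses them 10.0.0.1-10.0.0.4/24, enslaves the peer ends to the nvmf_br bridge, opens TCP port 4420 in iptables, and ping-verifies all four legs before nvmf_tgt is launched inside the namespace. A condensed sketch of one initiator/target leg, assuming the device names and addresses from the trace (the second pair is built identically):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one end of each stays in the root namespace as a bridge port
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the two root-namespace ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port and check reachability before the test proceeds
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3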
00:30:13.308 [2024-11-20 05:41:33.161145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.566 [2024-11-20 05:41:33.305562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.566 [2024-11-20 05:41:33.361759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.566 [2024-11-20 05:41:33.361802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.566 [2024-11-20 05:41:33.361814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.566 [2024-11-20 05:41:33.361824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.566 [2024-11-20 05:41:33.361833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.566 [2024-11-20 05:41:33.362798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.566 [2024-11-20 05:41:33.362919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.566 [2024-11-20 05:41:33.362926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.566 [2024-11-20 05:41:33.362884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:30:14.501 "poll_groups": [ 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.501 "current_io_qpairs": 0, 00:30:14.501 "io_qpairs": 0, 00:30:14.501 "name": "nvmf_tgt_poll_group_000", 00:30:14.501 "pending_bdev_io": 0, 00:30:14.501 "transports": [] 00:30:14.501 }, 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.501 "current_io_qpairs": 0, 00:30:14.501 "io_qpairs": 0, 00:30:14.501 "name": "nvmf_tgt_poll_group_001", 00:30:14.501 "pending_bdev_io": 0, 00:30:14.501 "transports": [] 00:30:14.501 }, 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.501 "current_io_qpairs": 0, 
00:30:14.501 "io_qpairs": 0, 00:30:14.501 "name": "nvmf_tgt_poll_group_002", 00:30:14.501 "pending_bdev_io": 0, 00:30:14.501 "transports": [] 00:30:14.501 }, 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.501 "current_io_qpairs": 0, 00:30:14.501 "io_qpairs": 0, 00:30:14.501 "name": "nvmf_tgt_poll_group_003", 00:30:14.501 "pending_bdev_io": 0, 00:30:14.501 "transports": [] 00:30:14.501 } 00:30:14.501 ], 00:30:14.501 "tick_rate": 2100000000 00:30:14.501 }' 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.501 [2024-11-20 05:41:34.295471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.501 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:30:14.501 "poll_groups": [ 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.501 "current_io_qpairs": 0, 00:30:14.501 "io_qpairs": 0, 00:30:14.501 "name": "nvmf_tgt_poll_group_000", 00:30:14.501 "pending_bdev_io": 0, 00:30:14.501 "transports": [ 00:30:14.501 { 00:30:14.501 "trtype": "TCP" 00:30:14.501 } 00:30:14.501 ] 00:30:14.501 }, 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.501 "current_io_qpairs": 0, 00:30:14.501 "io_qpairs": 0, 00:30:14.501 "name": "nvmf_tgt_poll_group_001", 00:30:14.501 "pending_bdev_io": 0, 00:30:14.501 "transports": [ 00:30:14.501 { 00:30:14.501 "trtype": "TCP" 00:30:14.501 } 00:30:14.501 ] 00:30:14.501 }, 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.501 "current_io_qpairs": 0, 00:30:14.501 "io_qpairs": 0, 00:30:14.501 "name": "nvmf_tgt_poll_group_002", 00:30:14.501 "pending_bdev_io": 0, 00:30:14.501 "transports": [ 00:30:14.501 { 00:30:14.501 "trtype": "TCP" 00:30:14.501 } 
00:30:14.501 ] 00:30:14.501 }, 00:30:14.501 { 00:30:14.501 "admin_qpairs": 0, 00:30:14.501 "completed_nvme_io": 0, 00:30:14.501 "current_admin_qpairs": 0, 00:30:14.502 "current_io_qpairs": 0, 00:30:14.502 "io_qpairs": 0, 00:30:14.502 "name": "nvmf_tgt_poll_group_003", 00:30:14.502 "pending_bdev_io": 0, 00:30:14.502 "transports": [ 00:30:14.502 { 00:30:14.502 "trtype": "TCP" 00:30:14.502 } 00:30:14.502 ] 00:30:14.502 } 00:30:14.502 ], 00:30:14.502 "tick_rate": 2100000000 00:30:14.502 }' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:30:14.502 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.761 Malloc1 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:30:14.761 05:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.761 [2024-11-20 05:41:34.502079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -a 10.0.0.3 -s 4420 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -a 10.0.0.3 -s 4420 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -a 10.0.0.3 -s 4420 00:30:14.761 [2024-11-20 05:41:34.530574] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203' 00:30:14.761 Failed to write to /dev/nvme-fabrics: Input/output error 00:30:14.761 could not add new controller: failed to write to nvme-fabrics device 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 
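[editor's note] The failed connect above is deliberate: target/rpc.sh first disables allow_any_host (the -d flag at rpc.sh@54), so a connect from an unregistered host NQN must be rejected with "does not allow host", and the NOT wrapper asserts the non-zero exit status. A minimal sketch of the same check, assuming rpc.py is SPDK's scripts/rpc.py talking to the target's RPC socket:

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # deny hosts not on the allow list
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # expected to fail: HOSTNQN has not been added to the subsystem
    if nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 --hostnqn="$HOSTNQN"; then
        echo "connect unexpectedly succeeded" >&2
        exit 1
    fi
    # whitelist the host; the same connect then succeeds (rpc.sh@61-62 below)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 --hostnqn="$HOSTNQN"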
00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.761 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:15.019 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:30:15.019 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:30:15.019 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:15.019 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:15.019 05:41:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:16.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:16.921 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:16.922 [2024-11-20 05:41:36.829732] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203' 00:30:16.922 Failed to write to /dev/nvme-fabrics: Input/output error 00:30:16.922 could not add new controller: failed to write to nvme-fabrics device 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:16.922 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:30:17.179 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.179 05:41:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:17.179 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:30:17.179 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:30:17.179 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:17.179 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:17.179 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:19.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:19.704 [2024-11-20 05:41:39.241565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:19.704 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:21.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:21.603 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:30:21.861 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.862 [2024-11-20 05:41:41.573964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.862 05:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:21.862 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:24.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 05:41:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 [2024-11-20 05:41:43.897469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.391 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:24.391 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:30:24.391 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:30:24.391 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:24.391 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:24.391 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # sleep 2 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:26.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:26.290 05:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.290 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:26.549 [2024-11-20 05:41:46.209356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:26.549 05:41:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:29.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:29.079 05:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:29.079 [2024-11-20 05:41:48.517277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
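[editor's note] Each connect in this loop, including the one that follows, is gated by the harness's waitforserial helper, which polls lsblk until a block device advertising the subsystem serial appears; waitforserial_disconnect polls the same way until the serial disappears after nvme disconnect. A simplified rendering of the polling pattern traced above (loop bound and sleep interval taken from the trace; the real helper also compares against an expected device count):

    waitforserial() {
        local serial=$1 i=0
        sleep 2                                        # give udev time to create the node
        while (( i++ <= 15 )); do
            # count block devices whose SERIAL column matches
            if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
                return 0
            fi
            sleep 2
        done
        return 1                                       # device never showed up
    }
    waitforserial SPDKISFASTANDAWESOME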
00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:29.079 05:41:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:30.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
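That closes one full pass of the loop traced above (target/rpc.sh@81-94): each iteration builds a subsystem, exercises it over NVMe/TCP from the initiator side, and tears it back down. Condensed, with commands and arguments exactly as they appear in the log; rpc_cmd and the waitforserial* helpers come from the harness, and $loops / NVME_HOST are the variables the trace implies (NVME_HOST expands to the --hostnqn/--hostid pair shown):

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
        waitforserial SPDKISFASTANDAWESOME               # block device must appear...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME    # ...and disappear again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

The seq 1 5 loop that follows (target/rpc.sh@99-107) repeats the same create/delete churn without the host-side connect, which is why no waitforserial traces appear in it.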
00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 [2024-11-20 05:41:50.832632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:30:30.997 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.998 [2024-11-20 05:41:50.880668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.998 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:31.256 05:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 [2024-11-20 05:41:50.928766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 [2024-11-20 05:41:50.976815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 
05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 [2024-11-20 05:41:51.024859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.256 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:30:31.256 "poll_groups": [ 00:30:31.256 { 00:30:31.256 "admin_qpairs": 2, 00:30:31.256 "completed_nvme_io": 65, 00:30:31.256 "current_admin_qpairs": 0, 00:30:31.256 "current_io_qpairs": 0, 00:30:31.256 "io_qpairs": 16, 00:30:31.256 "name": "nvmf_tgt_poll_group_000", 00:30:31.256 "pending_bdev_io": 0, 00:30:31.256 "transports": [ 00:30:31.256 { 00:30:31.256 "trtype": "TCP" 00:30:31.256 } 00:30:31.256 ] 00:30:31.256 }, 00:30:31.256 { 00:30:31.256 "admin_qpairs": 3, 00:30:31.256 "completed_nvme_io": 116, 00:30:31.256 "current_admin_qpairs": 0, 00:30:31.256 "current_io_qpairs": 0, 00:30:31.256 "io_qpairs": 17, 00:30:31.256 "name": "nvmf_tgt_poll_group_001", 00:30:31.256 "pending_bdev_io": 0, 00:30:31.256 "transports": [ 00:30:31.256 { 00:30:31.256 "trtype": "TCP" 00:30:31.256 } 00:30:31.256 ] 00:30:31.256 }, 00:30:31.256 { 00:30:31.257 "admin_qpairs": 1, 00:30:31.257 "completed_nvme_io": 119, 00:30:31.257 "current_admin_qpairs": 0, 00:30:31.257 "current_io_qpairs": 0, 00:30:31.257 "io_qpairs": 19, 00:30:31.257 "name": "nvmf_tgt_poll_group_002", 00:30:31.257 "pending_bdev_io": 0, 00:30:31.257 "transports": [ 00:30:31.257 { 00:30:31.257 "trtype": "TCP" 00:30:31.257 } 00:30:31.257 ] 00:30:31.257 }, 00:30:31.257 { 00:30:31.257 "admin_qpairs": 1, 00:30:31.257 "completed_nvme_io": 120, 00:30:31.257 "current_admin_qpairs": 0, 00:30:31.257 "current_io_qpairs": 0, 00:30:31.257 "io_qpairs": 18, 00:30:31.257 "name": "nvmf_tgt_poll_group_003", 00:30:31.257 "pending_bdev_io": 0, 00:30:31.257 "transports": [ 00:30:31.257 { 00:30:31.257 "trtype": "TCP" 00:30:31.257 } 00:30:31.257 ] 00:30:31.257 } 00:30:31.257 ], 
00:30:31.257 "tick_rate": 2100000000 00:30:31.257 }' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:31.257 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.515 rmmod nvme_tcp 00:30:31.515 rmmod nvme_fabrics 00:30:31.515 rmmod nvme_keyring 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 74078 ']' 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 74078 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 74078 ']' 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 74078 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74078 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:31.515 killing process with pid 74078 00:30:31.515 05:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74078' 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 74078 00:30:31.515 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 74078 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:31.772 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:32.030 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:32.030 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.030 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.030 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.030 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:30:32.030 00:30:32.030 real 0m19.281s 00:30:32.031 user 1m10.112s 00:30:32.031 sys 0m3.981s 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:32.031 ************************************ 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:32.031 END TEST nvmf_rpc 00:30:32.031 ************************************ 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:32.031 ************************************ 00:30:32.031 START TEST nvmf_invalid 00:30:32.031 ************************************ 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:30:32.031 * Looking for test storage... 00:30:32.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:30:32.031 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:32.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.290 --rc genhtml_branch_coverage=1 00:30:32.290 --rc genhtml_function_coverage=1 00:30:32.290 --rc genhtml_legend=1 00:30:32.290 --rc geninfo_all_blocks=1 00:30:32.290 --rc geninfo_unexecuted_blocks=1 00:30:32.290 00:30:32.290 ' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:32.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.290 --rc genhtml_branch_coverage=1 00:30:32.290 --rc genhtml_function_coverage=1 00:30:32.290 --rc genhtml_legend=1 00:30:32.290 --rc geninfo_all_blocks=1 00:30:32.290 --rc geninfo_unexecuted_blocks=1 00:30:32.290 00:30:32.290 ' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:32.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.290 --rc genhtml_branch_coverage=1 00:30:32.290 --rc genhtml_function_coverage=1 00:30:32.290 --rc genhtml_legend=1 00:30:32.290 --rc geninfo_all_blocks=1 00:30:32.290 --rc geninfo_unexecuted_blocks=1 00:30:32.290 00:30:32.290 ' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:32.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.290 --rc genhtml_branch_coverage=1 00:30:32.290 --rc genhtml_function_coverage=1 00:30:32.290 --rc genhtml_legend=1 00:30:32.290 --rc geninfo_all_blocks=1 00:30:32.290 --rc geninfo_unexecuted_blocks=1 00:30:32.290 00:30:32.290 ' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:30:32.290 05:41:51 
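The lt 1.15 2 probe traced here decides which lcov flags to export: scripts/common.sh splits each version on the [.-:] character class and compares field by field. A simplified sketch of cmp_versions as the xtrace suggests; the real helper also routes each field through the decimal() sanitizer visible in the trace, which this sketch folds into a :-0 default:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # equal versions satisfy only <=, >= or ==
    }

So lt 1.15 2 finds 1 < 2 in the first field and returns true, and the branch-coverage LCOV_OPTS above get exported.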
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.290 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:32.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:32.291 05:41:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
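One diagnostic in this stretch is worth decoding: '[' '' -eq 1 ']' at nvmf/common.sh@33 prints "[: : integer expression expected" because the variable under test is empty and test(1)'s -eq accepts only integers; the condition simply evaluates false and the script carries on. A minimal reproduction plus the usual guard (the variable name here is hypothetical):

    v=""
    [ "$v" -eq 1 ] && echo hit        # -> [: : integer expression expected, exit status 2
    [ "${v:-0}" -eq 1 ] && echo hit   # guarded form: quietly false instead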
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:32.291 Cannot find device "nvmf_init_br" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:32.291 Cannot find device "nvmf_init_br2" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:32.291 Cannot find device "nvmf_tgt_br" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:32.291 Cannot find device "nvmf_tgt_br2" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:32.291 Cannot find device "nvmf_init_br" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:32.291 Cannot find device "nvmf_init_br2" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:32.291 Cannot find device "nvmf_tgt_br" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:32.291 Cannot find device "nvmf_tgt_br2" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:32.291 Cannot find device "nvmf_br" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:32.291 Cannot find device "nvmf_init_if" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:32.291 Cannot find device "nvmf_init_if2" 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:32.291 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:32.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:32.291 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:32.550 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:32.550 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:32.551 05:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:32.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:32.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:30:32.551 00:30:32.551 --- 10.0.0.3 ping statistics --- 00:30:32.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.551 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:32.551 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:32.551 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:30:32.551 00:30:32.551 --- 10.0.0.4 ping statistics --- 00:30:32.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.551 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:32.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:32.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:30:32.551 00:30:32.551 --- 10.0.0.1 ping statistics --- 00:30:32.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.551 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:32.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:32.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:30:32.551 00:30:32.551 --- 10.0.0.2 ping statistics --- 00:30:32.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.551 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74648 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74648 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 74648 ']' 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:32.551 05:41:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:30:32.810 [2024-11-20 05:41:52.507072] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
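nvmfappstart (@507-510 above) reduces to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A sketch with the flags as logged (-i 0 shared-memory id, -e 0xFFFF tracepoint mask, -m 0xF four-core mask); the backgrounding and polling details are inferred from the waitforlisten trace:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts JSON-RPC (the default rpc_addr above)

The startup banner that follows confirms the DPDK EAL came up with the matching mask: four cores available, one reactor started on each.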
00:30:32.810 [2024-11-20 05:41:52.507154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.810 [2024-11-20 05:41:52.649245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.810 [2024-11-20 05:41:52.698800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.810 [2024-11-20 05:41:52.698853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.810 [2024-11-20 05:41:52.698864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.810 [2024-11-20 05:41:52.698872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.810 [2024-11-20 05:41:52.698879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.810 [2024-11-20 05:41:52.699818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.810 [2024-11-20 05:41:52.699921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.810 [2024-11-20 05:41:52.700041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.810 [2024-11-20 05:41:52.700043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:30:33.753 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27200 00:30:34.012 [2024-11-20 05:41:53.766363] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:30:34.012 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/20 05:41:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27200 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:30:34.012 request: 00:30:34.012 { 00:30:34.012 "method": "nvmf_create_subsystem", 00:30:34.012 "params": { 00:30:34.012 "nqn": "nqn.2016-06.io.spdk:cnode27200", 00:30:34.012 "tgt_name": "foobar" 00:30:34.012 } 00:30:34.012 } 00:30:34.012 Got JSON-RPC error response 00:30:34.012 GoRPCClient: error on JSON-RPC call' 00:30:34.012 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/20 05:41:53 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode27200 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:30:34.012 request: 00:30:34.012 { 00:30:34.012 "method": "nvmf_create_subsystem", 00:30:34.012 "params": { 00:30:34.012 "nqn": "nqn.2016-06.io.spdk:cnode27200", 00:30:34.012 "tgt_name": "foobar" 00:30:34.012 } 00:30:34.012 } 00:30:34.012 Got JSON-RPC error response 00:30:34.012 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:30:34.012 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:30:34.012 05:41:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4825 00:30:34.271 [2024-11-20 05:41:54.074724] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4825: invalid serial number 'SPDKISFASTANDAWESOME' 00:30:34.271 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/20 05:41:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4825 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:30:34.271 request: 00:30:34.271 { 00:30:34.271 "method": "nvmf_create_subsystem", 00:30:34.271 "params": { 00:30:34.271 "nqn": "nqn.2016-06.io.spdk:cnode4825", 00:30:34.271 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:30:34.271 } 00:30:34.271 } 00:30:34.271 Got JSON-RPC error response 00:30:34.271 GoRPCClient: error on JSON-RPC call' 00:30:34.271 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/20 05:41:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4825 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:30:34.271 request: 00:30:34.271 { 00:30:34.271 "method": "nvmf_create_subsystem", 00:30:34.271 "params": { 00:30:34.271 "nqn": "nqn.2016-06.io.spdk:cnode4825", 00:30:34.271 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:30:34.271 } 00:30:34.271 } 00:30:34.271 Got JSON-RPC error response 00:30:34.271 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:30:34.271 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:30:34.271 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7314 00:30:34.529 [2024-11-20 05:41:54.286989] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7314: invalid model number 'SPDK_Controller' 00:30:34.529 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/20 05:41:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode7314], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:30:34.529 request: 00:30:34.529 { 00:30:34.529 "method": "nvmf_create_subsystem", 00:30:34.529 "params": { 00:30:34.529 "nqn": "nqn.2016-06.io.spdk:cnode7314", 00:30:34.529 "model_number": "SPDK_Controller\u001f" 00:30:34.529 } 
00:30:34.529 } 00:30:34.529 Got JSON-RPC error response 00:30:34.529 GoRPCClient: error on JSON-RPC call' 00:30:34.529 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/20 05:41:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode7314], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:30:34.529 request: 00:30:34.529 { 00:30:34.529 "method": "nvmf_create_subsystem", 00:30:34.529 "params": { 00:30:34.529 "nqn": "nqn.2016-06.io.spdk:cnode7314", 00:30:34.529 "model_number": "SPDK_Controller\u001f" 00:30:34.529 } 00:30:34.529 } 00:30:34.529 Got JSON-RPC error response 00:30:34.529 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:30:34.529 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:30:34.529 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:30:34.530 
05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:30:34.530 
05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:30:34.530 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:30:34.789 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:30:34.789 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:30:34.789 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:30:34.789 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:30:34.789 05:41:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '&Uh?5;fO$ /dev/null' 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:30:38.208 00:30:38.208 real 0m6.128s 00:30:38.208 user 0m23.198s 00:30:38.208 sys 0m1.587s 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:30:38.208 ************************************ 00:30:38.208 END TEST nvmf_invalid 00:30:38.208 ************************************ 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 
-- # '[' 3 -le 1 ']' 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:38.208 ************************************ 00:30:38.208 START TEST nvmf_connect_stress 00:30:38.208 ************************************ 00:30:38.208 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:30:38.208 * Looking for test storage... 00:30:38.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:38.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:38.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:38.208 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.468 --rc genhtml_branch_coverage=1 00:30:38.468 --rc genhtml_function_coverage=1 00:30:38.468 --rc genhtml_legend=1 00:30:38.468 --rc geninfo_all_blocks=1 00:30:38.468 --rc geninfo_unexecuted_blocks=1 00:30:38.468 00:30:38.468 ' 00:30:38.468 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:38.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.468 --rc genhtml_branch_coverage=1 00:30:38.469 --rc genhtml_function_coverage=1 00:30:38.469 --rc genhtml_legend=1 00:30:38.469 --rc geninfo_all_blocks=1 00:30:38.469 --rc geninfo_unexecuted_blocks=1 00:30:38.469 00:30:38.469 ' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:38.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.469 --rc genhtml_branch_coverage=1 00:30:38.469 --rc genhtml_function_coverage=1 00:30:38.469 --rc genhtml_legend=1 00:30:38.469 --rc geninfo_all_blocks=1 00:30:38.469 --rc geninfo_unexecuted_blocks=1 00:30:38.469 00:30:38.469 ' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:38.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.469 --rc genhtml_branch_coverage=1 00:30:38.469 --rc genhtml_function_coverage=1 00:30:38.469 --rc genhtml_legend=1 00:30:38.469 --rc geninfo_all_blocks=1 00:30:38.469 --rc geninfo_unexecuted_blocks=1 00:30:38.469 00:30:38.469 ' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
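The nvmf_invalid run that just ended (END TEST above) follows one pattern throughout: drive nvmf_create_subsystem through scripts/rpc.py with a deliberately bad argument, capture the Go RPC client's error output, and assert on the error text. The long character-by-character stretch of that trace is gen_random_s assembling the random 21-character name one printf %x / echo -e pair at a time. A condensed sketch of both pieces; gen_random_s is simplified here (the real helper draws from an explicit chars array covering ASCII 32-127 with the same echo -e trick, and its handling of a leading '-', visible as the [[ ... == \- ]] check, is assumed below):

    # condensed sketch of target/invalid.sh's helpers (simplified, not verbatim)
    gen_random_s() {
        local length=$1 ll string=
        for ((ll = 0; ll < length; ll++)); do
            # same trick as the trace: pick a code point, render it with echo -e
            string+=$(echo -e "\\x$(printf %x $((32 + RANDOM % 96)))")
        done
        [[ ${string:0:1} == - ]] && string="_${string:1}"  # assumption: leading '-' replaced
        echo "$string"
    }

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # unknown target name -> Code=-32603 "Unable to find target"
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27200 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # serial number with a trailing control character -> "Invalid SN"
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4825 2>&1) || true
    [[ $out == *"Invalid SN"* ]]

    # model number with a trailing control character -> "Invalid MN"
    out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7314 2>&1) || true
    [[ $out == *"Invalid MN"* ]]

The [[ $out == *\U\n\a\b\l\e...* ]] forms in the trace are these same pattern matches after bash has escaped each character for xtrace. The lt 1.15 2 block just above, by contrast, is unrelated housekeeping: scripts/common.sh comparing the installed lcov version field by field against 2 to pick the right set of coverage flags for LCOV_OPTS.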
00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:38.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:30:38.469 05:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:38.469 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:38.469 05:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:38.470 Cannot find device "nvmf_init_br" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:38.470 Cannot find device "nvmf_init_br2" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:38.470 Cannot find device "nvmf_tgt_br" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:38.470 Cannot find device "nvmf_tgt_br2" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:38.470 Cannot find device "nvmf_init_br" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:38.470 Cannot find device "nvmf_init_br2" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:38.470 Cannot find device "nvmf_tgt_br" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:38.470 Cannot find device "nvmf_tgt_br2" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:38.470 Cannot find device "nvmf_br" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:38.470 Cannot find device "nvmf_init_if" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:38.470 Cannot find device "nvmf_init_if2" 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:38.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:38.470 05:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:38.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:38.470 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:38.729 05:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:38.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:38.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:30:38.729 00:30:38.729 --- 10.0.0.3 ping statistics --- 00:30:38.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.729 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:30:38.729 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:38.729 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:38.729 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:30:38.730 00:30:38.730 --- 10.0.0.4 ping statistics --- 00:30:38.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.730 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:30:38.730 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:38.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:38.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:30:38.730 00:30:38.730 --- 10.0.0.1 ping statistics --- 00:30:38.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.730 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:30:38.730 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:38.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:38.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:30:38.991 00:30:38.991 --- 10.0.0.2 ping statistics --- 00:30:38.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.991 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=75205 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 75205 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 75205 ']' 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:38.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:38.991 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:38.991 [2024-11-20 05:41:58.744526] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
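nvmfappstart above is doing two things: launching nvmf_tgt inside the target namespace and blocking until its JSON-RPC socket is ready. A condensed sketch; the polling loop is a simplified stand-in for the real waitforlisten helper, which the trace calls with the target's pid:

    # start the target in the namespace: reactors on cores 1-3 (-m 0xE),
    # all tracepoint groups enabled (-e 0xFFFF), shared-memory id 0 (-i 0)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # simplified stand-in for waitforlisten: poll for the RPC unix socket,
    # bailing out early if the target dies during startup
    rpc_sock=/var/tmp/spdk.sock
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    while ! [[ -S $rpc_sock ]]; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.1
    done

The EAL parameter line that follows is DPDK bringing up its environment (--iova-mode=pa, a fixed --base-virtaddr, hugepage files prefixed spdk0), after which SPDK pins one reactor thread per core in the mask, matching the "Reactor started on core" notices.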
00:30:38.991 [2024-11-20 05:41:58.744630] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.251 [2024-11-20 05:41:58.909492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:39.251 [2024-11-20 05:41:58.976026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.251 [2024-11-20 05:41:58.976079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.251 [2024-11-20 05:41:58.976094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.251 [2024-11-20 05:41:58.976107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.251 [2024-11-20 05:41:58.976118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.251 [2024-11-20 05:41:58.979490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.251 [2024-11-20 05:41:58.979638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:39.251 [2024-11-20 05:41:58.979649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.815 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:39.815 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:30:39.815 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.815 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:39.815 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 [2024-11-20 05:41:59.771137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:40.074 05:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 [2024-11-20 05:41:59.791283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 NULL1 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75257 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.074 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.075 05:41:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:40.332 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:30:40.333 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:40.333 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:40.333 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.333 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:40.897 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.897 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:40.897 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:40.897 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.897 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:41.154 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.154 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:41.154 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:41.154 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.154 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:41.421 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.421 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:41.421 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:41.421 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.421 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:41.679 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.679 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:41.679 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:41.679 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.679 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:41.938 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.938 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:41.938 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:41.938 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.938 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:42.505 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.505 
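
The repeating kill -0 75257 / rpc_cmd pairs above and below are connect_stress.sh's supervision loop: for as long as the stress binary is alive, the harness keeps replaying a batch of RPCs against the target. A minimal sketch of that idiom, assuming the rpc_cmd helper and the $rpcs batch file prepared at lines 23-25 of the trace (the input redirection is an assumption; xtrace does not show redirections):

  # Sketch only: keep exercising the RPC server while the stressor (PERF_PID)
  # is still running; kill -0 probes for liveness without sending a signal.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd <"$rpcs"    # replay the 20 heredoc-generated RPCs from rpc.txt
  done
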
05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:42.505 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:42.505 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.505 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:42.763 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.763 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:42.763 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:42.763 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.763 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:43.021 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.021 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:43.021 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:43.021 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.021 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:43.280 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.280 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:43.280 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:43.280 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.280 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:43.539 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.539 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:43.539 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:43.539 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.539 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:44.106 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.106 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:44.106 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:44.106 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.106 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:44.366 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.366 05:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:44.366 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:44.366 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.366 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:44.624 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.624 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:44.624 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:44.624 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.624 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:44.883 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.883 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:44.883 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:44.883 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.883 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:45.449 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.449 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:45.449 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:45.449 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.449 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:45.708 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.708 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:45.708 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:45.708 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.708 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:45.967 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.967 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:45.967 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:45.967 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.967 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:46.226 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.226 05:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:46.226 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:46.226 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.226 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:46.484 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.484 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:46.484 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:46.484 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.484 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.052 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.052 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:47.052 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:47.052 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.052 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.316 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.316 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:47.316 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:47.316 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.316 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.649 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.649 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:47.649 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:47.649 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.649 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:47.906 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.906 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:47.906 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:47.906 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.906 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:48.165 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.165 05:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:48.165 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:48.165 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.165 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:48.424 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.424 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:48.424 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:48.424 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.424 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:48.991 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.991 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:48.991 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:48.991 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.991 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:49.250 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.250 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:49.250 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:49.250 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.250 05:42:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:49.508 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.508 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:49.508 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:49.508 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.508 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:49.767 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.767 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:49.767 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:49.767 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.767 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.025 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.025 05:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:50.025 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:30:50.025 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.025 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:50.283 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75257 00:30:50.541 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75257) - No such process 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75257 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:50.541 rmmod nvme_tcp 00:30:50.541 rmmod nvme_fabrics 00:30:50.541 rmmod nvme_keyring 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 75205 ']' 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 75205 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 75205 ']' 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 75205 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75205 00:30:50.541 killing process with pid 75205 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:50.541 
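
The teardown above walks autotest_common.sh's killprocess helper: verify the PID, check what the process actually is, log, then kill and reap. A compressed sketch of that flow (simplified; the real helper also special-cases sudo-wrapped processes and non-Linux hosts):

  # Sketch: guarded kill-and-reap, following the trace above.
  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                          # still alive?
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid") # reactor_1 in this run
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                         # reap and propagate status
  }
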
05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75205' 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 75205 00:30:50.541 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 75205 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:50.798 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.057 05:42:10 
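
nvmf_tcp_fini then scrubs the firewall with the iptr helper. Every rule the suite adds carries an 'SPDK_NVMF' comment (see the ipts calls in the fused_ordering setup further down), so cleanup is just a filter over the saved ruleset:

  # The iptr idiom traced above: drop only the SPDK-tagged rules.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
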
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:30:51.057 00:30:51.057 real 0m12.828s 00:30:51.057 user 0m40.470s 00:30:51.057 sys 0m4.672s 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:51.057 ************************************ 00:30:51.057 END TEST nvmf_connect_stress 00:30:51.057 ************************************ 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:51.057 ************************************ 00:30:51.057 START TEST nvmf_fused_ordering 00:30:51.057 ************************************ 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:30:51.057 * Looking for test storage... 00:30:51.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:30:51.057 05:42:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.316 05:42:11 
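
The scripts/common.sh trace that follows is the lcov version gate: cmp_versions splits each version string on dots and dashes, then compares field by field, treating missing fields as 0. A standalone sketch of the same logic (simplified; the real helper also validates each field through its decimal function):

  # Sketch of the "lt 1.15 2" comparison walked through below.
  lt() {
      local IFS=.- v1 v2 i
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1
  }
  lt 1.15 2 && echo "lcov is older than 2.x"
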
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:51.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.316 --rc genhtml_branch_coverage=1 00:30:51.316 --rc genhtml_function_coverage=1 00:30:51.316 --rc genhtml_legend=1 00:30:51.316 --rc geninfo_all_blocks=1 00:30:51.316 --rc geninfo_unexecuted_blocks=1 00:30:51.316 00:30:51.316 ' 00:30:51.316 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:51.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.317 --rc genhtml_branch_coverage=1 00:30:51.317 --rc genhtml_function_coverage=1 00:30:51.317 --rc genhtml_legend=1 00:30:51.317 --rc geninfo_all_blocks=1 00:30:51.317 --rc geninfo_unexecuted_blocks=1 00:30:51.317 00:30:51.317 ' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:51.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.317 --rc genhtml_branch_coverage=1 00:30:51.317 --rc genhtml_function_coverage=1 00:30:51.317 --rc genhtml_legend=1 00:30:51.317 --rc geninfo_all_blocks=1 00:30:51.317 --rc geninfo_unexecuted_blocks=1 00:30:51.317 00:30:51.317 ' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:51.317 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:51.317 --rc genhtml_branch_coverage=1 00:30:51.317 --rc genhtml_function_coverage=1 00:30:51.317 --rc genhtml_legend=1 00:30:51.317 --rc geninfo_all_blocks=1 00:30:51.317 --rc geninfo_unexecuted_blocks=1 00:30:51.317 00:30:51.317 ' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:30:51.317 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:51.317 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:51.318 05:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:51.318 Cannot find device "nvmf_init_br" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:51.318 Cannot find device "nvmf_init_br2" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:51.318 Cannot find device "nvmf_tgt_br" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:51.318 Cannot find device "nvmf_tgt_br2" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:51.318 Cannot find device "nvmf_init_br" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:51.318 Cannot find device "nvmf_init_br2" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:51.318 Cannot find device "nvmf_tgt_br" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:51.318 Cannot find device "nvmf_tgt_br2" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:51.318 Cannot find device "nvmf_br" 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:30:51.318 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:51.577 Cannot find device "nvmf_init_if" 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:30:51.577 
05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:51.577 Cannot find device "nvmf_init_if2" 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:51.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:51.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:51.577 05:42:11 
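
Taken together, the nvmf_veth_init commands traced above and just below build the virtual test topology: veth pairs whose target-side ends live in the nvmf_tgt_ns_spdk namespace, with the host-side ends enslaved to a bridge. A condensed sketch using the addresses from this run (the run also creates a second initiator/target pair for 10.0.0.2 and 10.0.0.4):

  # Initiator at 10.0.0.1 on the host, target at 10.0.0.3 inside the namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end enters the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the host-side ends
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
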
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:51.577 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:51.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:51.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:30:51.835 00:30:51.835 --- 10.0.0.3 ping statistics --- 00:30:51.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.835 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:51.835 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:51.835 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:30:51.835 00:30:51.835 --- 10.0.0.4 ping statistics --- 00:30:51.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.835 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:51.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:51.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:30:51.835 00:30:51.835 --- 10.0.0.1 ping statistics --- 00:30:51.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.835 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:51.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:30:51.835 00:30:51.835 --- 10.0.0.2 ping statistics --- 00:30:51.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.835 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:51.835 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75642 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75642 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 75642 ']' 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
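
After the ping checks confirm the topology, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the fresh process answers on its RPC socket. A rough equivalent of that wait (an approximation, not the helper's exact body; rpc_get_methods is a cheap, always-available SPDK RPC):

  # Poll the UNIX-domain RPC socket until the target responds or we give up.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
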
00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:51.836 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:51.836 [2024-11-20 05:42:11.617077] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:30:51.836 [2024-11-20 05:42:11.617156] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.094 [2024-11-20 05:42:11.765143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.094 [2024-11-20 05:42:11.819107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.094 [2024-11-20 05:42:11.819161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.094 [2024-11-20 05:42:11.819172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.094 [2024-11-20 05:42:11.819181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.094 [2024-11-20 05:42:11.819188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.094 [2024-11-20 05:42:11.819528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 05:42:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 [2024-11-20 05:42:11.998830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.094 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:52.094 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.094 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:52.094 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.094 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:52.353 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:52.354 [2024-11-20 05:42:12.014938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:52.354 NULL1 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.354 05:42:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:52.354 [2024-11-20 05:42:12.070063] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:30:52.354 [2024-11-20 05:42:12.070103] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75673 ] 00:30:52.639 Attached to nqn.2016-06.io.spdk:cnode1 00:30:52.639 Namespace ID: 1 size: 1GB 00:30:52.639 fused_ordering(0) 00:30:52.639 fused_ordering(1) 00:30:52.639 fused_ordering(2) 00:30:52.639 fused_ordering(3) 00:30:52.639 fused_ordering(4) 00:30:52.639 fused_ordering(5) 00:30:52.639 fused_ordering(6) 00:30:52.639 fused_ordering(7) 00:30:52.639 fused_ordering(8) 00:30:52.639 fused_ordering(9) 00:30:52.639 fused_ordering(10) 00:30:52.639 fused_ordering(11) 00:30:52.639 fused_ordering(12) 00:30:52.639 fused_ordering(13) 00:30:52.639 fused_ordering(14) 00:30:52.639 fused_ordering(15) 00:30:52.639 fused_ordering(16) 00:30:52.639 fused_ordering(17) 00:30:52.639 fused_ordering(18) 00:30:52.639 fused_ordering(19) 00:30:52.639 fused_ordering(20) 00:30:52.639 fused_ordering(21) 00:30:52.639 fused_ordering(22) 00:30:52.639 fused_ordering(23) 00:30:52.639 fused_ordering(24) 00:30:52.639 fused_ordering(25) 00:30:52.639 fused_ordering(26) 00:30:52.639 fused_ordering(27) 00:30:52.639 fused_ordering(28) 00:30:52.639 fused_ordering(29) 00:30:52.639 fused_ordering(30) 00:30:52.639 fused_ordering(31) 00:30:52.639 fused_ordering(32) 00:30:52.639 fused_ordering(33) 00:30:52.639 fused_ordering(34) 00:30:52.639 fused_ordering(35) 00:30:52.639 fused_ordering(36) 00:30:52.639 fused_ordering(37) 00:30:52.639 fused_ordering(38) 00:30:52.639 fused_ordering(39) 00:30:52.639 fused_ordering(40) 00:30:52.639 fused_ordering(41) 00:30:52.639 fused_ordering(42) 00:30:52.639 fused_ordering(43) 00:30:52.639 fused_ordering(44) 00:30:52.639 fused_ordering(45) 00:30:52.639 fused_ordering(46) 00:30:52.639 fused_ordering(47) 00:30:52.639 fused_ordering(48) 00:30:52.639 fused_ordering(49) 00:30:52.639 fused_ordering(50) 00:30:52.639 fused_ordering(51) 00:30:52.639 fused_ordering(52) 00:30:52.639 fused_ordering(53) 00:30:52.639 fused_ordering(54) 00:30:52.639 fused_ordering(55) 00:30:52.639 fused_ordering(56) 00:30:52.639 fused_ordering(57) 00:30:52.639 fused_ordering(58) 00:30:52.639 fused_ordering(59) 00:30:52.639 fused_ordering(60) 00:30:52.639 fused_ordering(61) 00:30:52.639 fused_ordering(62) 00:30:52.639 fused_ordering(63) 00:30:52.639 fused_ordering(64) 00:30:52.639 fused_ordering(65) 00:30:52.639 fused_ordering(66) 00:30:52.639 fused_ordering(67) 00:30:52.639 fused_ordering(68) 00:30:52.639 fused_ordering(69) 00:30:52.639 fused_ordering(70) 00:30:52.639 fused_ordering(71) 00:30:52.639 fused_ordering(72) 00:30:52.639 fused_ordering(73) 00:30:52.639 fused_ordering(74) 00:30:52.639 fused_ordering(75) 00:30:52.639 fused_ordering(76) 00:30:52.639 fused_ordering(77) 00:30:52.639 fused_ordering(78) 00:30:52.639 fused_ordering(79) 00:30:52.639 fused_ordering(80) 00:30:52.639 fused_ordering(81) 00:30:52.639 fused_ordering(82) 00:30:52.639 fused_ordering(83) 00:30:52.639 fused_ordering(84) 00:30:52.639 fused_ordering(85) 00:30:52.639 fused_ordering(86) 00:30:52.639 fused_ordering(87) 00:30:52.639 fused_ordering(88) 00:30:52.639 fused_ordering(89) 00:30:52.639 fused_ordering(90) 00:30:52.639 fused_ordering(91) 00:30:52.639 fused_ordering(92) 00:30:52.639 fused_ordering(93) 00:30:52.639 fused_ordering(94) 00:30:52.639 fused_ordering(95) 00:30:52.639 fused_ordering(96) 00:30:52.639 fused_ordering(97) 00:30:52.639 
[fused_ordering(98) through fused_ordering(957) elided: 860 consecutive fused_ordering counter lines, timestamps advancing from 00:30:52.639 to 00:30:54.297] 00:30:54.297
fused_ordering(958) 00:30:54.297 fused_ordering(959) 00:30:54.297 fused_ordering(960) 00:30:54.297 fused_ordering(961) 00:30:54.297 fused_ordering(962) 00:30:54.297 fused_ordering(963) 00:30:54.297 fused_ordering(964) 00:30:54.297 fused_ordering(965) 00:30:54.297 fused_ordering(966) 00:30:54.297 fused_ordering(967) 00:30:54.297 fused_ordering(968) 00:30:54.297 fused_ordering(969) 00:30:54.297 fused_ordering(970) 00:30:54.297 fused_ordering(971) 00:30:54.297 fused_ordering(972) 00:30:54.297 fused_ordering(973) 00:30:54.297 fused_ordering(974) 00:30:54.297 fused_ordering(975) 00:30:54.297 fused_ordering(976) 00:30:54.297 fused_ordering(977) 00:30:54.297 fused_ordering(978) 00:30:54.297 fused_ordering(979) 00:30:54.298 fused_ordering(980) 00:30:54.298 fused_ordering(981) 00:30:54.298 fused_ordering(982) 00:30:54.298 fused_ordering(983) 00:30:54.298 fused_ordering(984) 00:30:54.298 fused_ordering(985) 00:30:54.298 fused_ordering(986) 00:30:54.298 fused_ordering(987) 00:30:54.298 fused_ordering(988) 00:30:54.298 fused_ordering(989) 00:30:54.298 fused_ordering(990) 00:30:54.298 fused_ordering(991) 00:30:54.298 fused_ordering(992) 00:30:54.298 fused_ordering(993) 00:30:54.298 fused_ordering(994) 00:30:54.298 fused_ordering(995) 00:30:54.298 fused_ordering(996) 00:30:54.298 fused_ordering(997) 00:30:54.298 fused_ordering(998) 00:30:54.298 fused_ordering(999) 00:30:54.298 fused_ordering(1000) 00:30:54.298 fused_ordering(1001) 00:30:54.298 fused_ordering(1002) 00:30:54.298 fused_ordering(1003) 00:30:54.298 fused_ordering(1004) 00:30:54.298 fused_ordering(1005) 00:30:54.298 fused_ordering(1006) 00:30:54.298 fused_ordering(1007) 00:30:54.298 fused_ordering(1008) 00:30:54.298 fused_ordering(1009) 00:30:54.298 fused_ordering(1010) 00:30:54.298 fused_ordering(1011) 00:30:54.298 fused_ordering(1012) 00:30:54.298 fused_ordering(1013) 00:30:54.298 fused_ordering(1014) 00:30:54.298 fused_ordering(1015) 00:30:54.298 fused_ordering(1016) 00:30:54.298 fused_ordering(1017) 00:30:54.298 fused_ordering(1018) 00:30:54.298 fused_ordering(1019) 00:30:54.298 fused_ordering(1020) 00:30:54.298 fused_ordering(1021) 00:30:54.298 fused_ordering(1022) 00:30:54.298 fused_ordering(1023) 00:30:54.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:30:54.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:30:54.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:54.298 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:30:54.555 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.555 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:30:54.555 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.555 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.555 rmmod nvme_tcp 00:30:54.555 rmmod nvme_fabrics 00:30:54.555 rmmod nvme_keyring 00:30:54.555 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.555 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:30:54.555 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:30:54.555 05:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75642 ']' 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75642 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 75642 ']' 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 75642 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75642 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:54.556 killing process with pid 75642 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75642' 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 75642 00:30:54.556 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 75642 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:30:54.814 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:30:55.072 00:30:55.072 real 0m3.952s 00:30:55.072 user 0m4.299s 00:30:55.072 sys 0m1.640s 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:55.072 ************************************ 00:30:55.072 END TEST nvmf_fused_ordering 00:30:55.072 ************************************ 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:55.072 05:42:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:55.072 ************************************ 00:30:55.073 START TEST nvmf_ns_masking 00:30:55.073 ************************************ 00:30:55.073 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:30:55.073 * Looking for test storage... 
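One note before the ns_masking setup scrolls past: the fused_ordering teardown traced just above is the mirror image of the setup, and its firewall cleanup relies on the SPDK_NVMF comment tag planted by ipts. The iptr helper re-applies the saved ruleset minus the tagged entries, so unrelated host rules survive. A sketch of the equivalent cleanup, assuming the interface and namespace names used throughout this job (the final ip netns delete is an assumption, since the trace hides _remove_spdk_ns behind xtrace_disable_per_cmd):

  # drop only the firewall rules this job added (tag-and-sweep)
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # unwind bridge membership, then the bridge and veth endpoints
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" nomaster && ip link set "$l" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk    # assumed final step of _remove_spdk_ns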
00:30:55.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:55.073 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:55.073 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:30:55.073 05:42:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.332 --rc genhtml_branch_coverage=1 00:30:55.332 --rc genhtml_function_coverage=1 00:30:55.332 --rc genhtml_legend=1 00:30:55.332 --rc geninfo_all_blocks=1 00:30:55.332 --rc geninfo_unexecuted_blocks=1 00:30:55.332 00:30:55.332 ' 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.332 --rc genhtml_branch_coverage=1 00:30:55.332 --rc genhtml_function_coverage=1 00:30:55.332 --rc genhtml_legend=1 00:30:55.332 --rc geninfo_all_blocks=1 00:30:55.332 --rc geninfo_unexecuted_blocks=1 00:30:55.332 00:30:55.332 ' 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.332 --rc genhtml_branch_coverage=1 00:30:55.332 --rc genhtml_function_coverage=1 00:30:55.332 --rc genhtml_legend=1 00:30:55.332 --rc geninfo_all_blocks=1 00:30:55.332 --rc geninfo_unexecuted_blocks=1 00:30:55.332 00:30:55.332 ' 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:55.332 --rc genhtml_branch_coverage=1 00:30:55.332 --rc genhtml_function_coverage=1 00:30:55.332 --rc genhtml_legend=1 00:30:55.332 --rc geninfo_all_blocks=1 00:30:55.332 --rc geninfo_unexecuted_blocks=1 00:30:55.332 00:30:55.332 ' 00:30:55.332 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same toolchain and system entries as export.sh@2 above, rotated; full value elided] 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[rotated duplicate of the same PATH value; elided] 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same PATH value as above; elided] 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:55.333 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- #
hostsock=/var/tmp/host.sock 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=ed7ce590-f09b-4c05-a053-0ed5ac83c699 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1b1a0299-9650-4dc3-949f-6491ac08f889 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ab1e9afd-3bc9-4de9-bd2a-23a77ffc9505 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:55.333 05:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:55.333 Cannot find device "nvmf_init_br" 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:55.333 Cannot find device "nvmf_init_br2" 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:30:55.333 Cannot find device "nvmf_tgt_br" 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:55.333 Cannot find device "nvmf_tgt_br2" 00:30:55.333 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:30:55.334 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:55.334 Cannot find device "nvmf_init_br" 00:30:55.334 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:30:55.334 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:55.334 Cannot find device "nvmf_init_br2" 00:30:55.334 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:30:55.334 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:55.334 Cannot find device "nvmf_tgt_br" 00:30:55.334 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:30:55.334 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:55.592 Cannot find device 
"nvmf_tgt_br2" 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:55.592 Cannot find device "nvmf_br" 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:55.592 Cannot find device "nvmf_init_if" 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:55.592 Cannot find device "nvmf_init_if2" 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:55.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:55.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:55.592 
05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:55.592 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:55.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:55.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:30:55.852 00:30:55.852 --- 10.0.0.3 ping statistics --- 00:30:55.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.852 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:55.852 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:30:55.852 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:30:55.852 00:30:55.852 --- 10.0.0.4 ping statistics --- 00:30:55.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.852 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:55.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:55.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:30:55.852 00:30:55.852 --- 10.0.0.1 ping statistics --- 00:30:55.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.852 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:55.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:30:55.852 00:30:55.852 --- 10.0.0.2 ping statistics --- 00:30:55.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.852 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75924 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75924 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 75924 ']' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:55.852 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:55.852 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:30:55.852 [2024-11-20 05:42:15.721487] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:30:55.852 [2024-11-20 05:42:15.721584] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.110 [2024-11-20 05:42:15.879811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.110 [2024-11-20 05:42:15.972694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.110 [2024-11-20 05:42:15.973018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.110 [2024-11-20 05:42:15.973425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.110 [2024-11-20 05:42:15.973796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.110 [2024-11-20 05:42:15.973913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.110 [2024-11-20 05:42:15.974589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.044 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:57.044 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:30:57.044 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.044 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:57.044 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:30:57.044 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.044 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:57.304 [2024-11-20 05:42:17.187738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.304 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:30:57.304 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:30:57.304 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:57.870 Malloc1 00:30:57.870 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:30:58.129 Malloc2 00:30:58.129 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:58.387 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:30:58.955 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:58.955 [2024-11-20 05:42:18.833570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:58.955 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:30:58.955 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab1e9afd-3bc9-4de9-bd2a-23a77ffc9505 -a 10.0.0.3 -s 4420 -i 4 00:30:59.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:30:59.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:30:59.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:59.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:59.213 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:31:01.190 05:42:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:01.190 [ 0]:0x1 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
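
Note on the check being traced here: ns_is_visible boils down to two nvme-cli probes, whether the nsid shows up in 'nvme list-ns' and whether its NGUID reads back non-zero. A minimal reconstruction from the xtrace above (the @43-@45 tags correspond to test/nvmf/target/ns_masking.sh in the SPDK repo; treat this as a sketch inferred from the trace, not the exact source):

    ns_is_visible() {
        # "[ 0]:0x1" in the listing means the controller reports nsid 1 active
        nvme list-ns /dev/nvme0 | grep "$1"
        # a namespace masked from this host identifies with an all-zero NGUID
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r ".nguid")
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen further down inverts the exit status, so a masked namespace is asserted invisible by expecting this probe to fail.
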
00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5d650a9c2ecd412c8144e0dc8ee93e0d 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5d650a9c2ecd412c8144e0dc8ee93e0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:01.190 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:01.755 [ 0]:0x1 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5d650a9c2ecd412c8144e0dc8ee93e0d 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5d650a9c2ecd412c8144e0dc8ee93e0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:31:01.755 [ 1]:0x2 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f81b0bd4de5c4f7085697b5a912a33ea 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f81b0bd4de5c4f7085697b5a912a33ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:01.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:01.755 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.321 05:42:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:31:02.578 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:31:02.579 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab1e9afd-3bc9-4de9-bd2a-23a77ffc9505 -a 10.0.0.3 -s 4420 -i 4 00:31:02.579 05:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:31:02.579 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:31:02.579 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:02.579 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:31:02.579 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:31:02.579 05:42:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:31:05.109 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:05.110 [ 0]:0x2 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f81b0bd4de5c4f7085697b5a912a33ea 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f81b0bd4de5c4f7085697b5a912a33ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:05.110 [ 0]:0x1 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5d650a9c2ecd412c8144e0dc8ee93e0d 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5d650a9c2ecd412c8144e0dc8ee93e0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:31:05.110 [ 1]:0x2 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:31:05.110 05:42:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:05.368 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=f81b0bd4de5c4f7085697b5a912a33ea 00:31:05.368 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f81b0bd4de5c4f7085697b5a912a33ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:05.368 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:31:05.626 [ 0]:0x2 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f81b0bd4de5c4f7085697b5a912a33ea 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ f81b0bd4de5c4f7085697b5a912a33ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:31:05.626 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:05.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:05.884 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab1e9afd-3bc9-4de9-bd2a-23a77ffc9505 -a 10.0.0.3 -s 4420 -i 4 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:31:06.142 05:42:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:31:08.674 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:08.674 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:08.674 05:42:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:08.674 [ 0]:0x1 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:08.674 
05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5d650a9c2ecd412c8144e0dc8ee93e0d 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5d650a9c2ecd412c8144e0dc8ee93e0d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:31:08.674 [ 1]:0x2 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f81b0bd4de5c4f7085697b5a912a33ea 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f81b0bd4de5c4f7085697b5a912a33ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
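
Context for the pass/fail pattern above: namespace 1 was re-added with --no-auto-visible, so per-host visibility is toggled through two RPCs against the target. Compressed to its essentials, using the same rpc.py path the trace expands (a sketch of the sequence; all three calls appear verbatim in the xtrace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # added invisible: no initiator enumerates nsid 1 yet
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant host1 access: ns_is_visible 0x1 then succeeds on that host
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # revoke it: nsid 1 drops out of list-ns and its NGUID reads all zeros
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Namespace 2 was added without --no-auto-visible, which is why 0x2 stays visible through every check above.
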
00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:08.674 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:31:08.944 [ 0]:0x2 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f81b0bd4de5c4f7085697b5a912a33ea 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f81b0bd4de5c4f7085697b5a912a33ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:08.944 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:31:09.209 [2024-11-20 05:42:28.954188] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:31:09.209 2024/11/20 05:42:28 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:31:09.209 request: 00:31:09.209 { 00:31:09.209 "method": "nvmf_ns_remove_host", 00:31:09.209 "params": { 00:31:09.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:09.209 "nsid": 2, 00:31:09.209 "host": "nqn.2016-06.io.spdk:host1" 00:31:09.209 } 00:31:09.209 } 00:31:09.209 Got JSON-RPC error response 00:31:09.209 GoRPCClient: error on JSON-RPC call 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:09.209 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:31:09.210 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:31:09.210 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:31:09.210 [ 0]:0x2 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f81b0bd4de5c4f7085697b5a912a33ea 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f81b0bd4de5c4f7085697b5a912a33ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:31:09.210 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:09.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76314 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76314 /var/tmp/host.sock 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 76314 ']' 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:09.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:09.467 05:42:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:31:09.467 [2024-11-20 05:42:29.211018] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
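
From this point the test runs two SPDK processes: the nvmf target keeps the default /var/tmp/spdk.sock inside the netns, while a second spdk_tgt (core mask 2, pid 76314 above) listens on /var/tmp/host.sock and plays the initiator role via bdev_nvme. The hostrpc wrapper tagged target/ns_masking.sh@48 is rpc.py pointed at that second socket, as its expansion below shows; a sketch:

    hostrpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    # attach the masked subsystem from the host-side app:
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

The uuid2nguid step that follows strips the dashes from a bdev UUID for the -g flag of nvmf_subsystem_add_ns (ed7ce590-f09b-4c05-a053-0ed5ac83c699 becomes ED7CE590F09B4C05A0530ED5AC83C699); only the 'tr -d -' half is visible in the trace, and the upcasing is inferred from the resulting value.
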
00:31:09.468 [2024-11-20 05:42:29.211116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76314 ] 00:31:09.468 [2024-11-20 05:42:29.363643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.725 [2024-11-20 05:42:29.457259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.660 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:10.660 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:31:10.660 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.660 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:10.918 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid ed7ce590-f09b-4c05-a053-0ed5ac83c699 00:31:10.918 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:31:10.918 05:42:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ED7CE590F09B4C05A0530ED5AC83C699 -i 00:31:11.175 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1b1a0299-9650-4dc3-949f-6491ac08f889 00:31:11.175 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:31:11.175 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1B1A029996504DC3949F6491AC08F889 -i 00:31:11.740 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:31:11.740 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:31:11.997 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:31:11.997 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:31:12.561 nvme0n1 00:31:12.561 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:31:12.561 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:31:12.818 nvme1n2 00:31:12.818 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:31:12.818 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:31:12.818 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:31:12.818 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:31:12.818 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:31:13.076 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:31:13.076 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:31:13.076 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:31:13.076 05:42:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:31:13.333 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ ed7ce590-f09b-4c05-a053-0ed5ac83c699 == \e\d\7\c\e\5\9\0\-\f\0\9\b\-\4\c\0\5\-\a\0\5\3\-\0\e\d\5\a\c\8\3\c\6\9\9 ]] 00:31:13.333 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:31:13.333 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:31:13.333 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:31:13.333 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1b1a0299-9650-4dc3-949f-6491ac08f889 == \1\b\1\a\0\2\9\9\-\9\6\5\0\-\4\d\c\3\-\9\4\9\f\-\6\4\9\1\a\c\0\8\f\8\8\9 ]] 00:31:13.333 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.591 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid ed7ce590-f09b-4c05-a053-0ed5ac83c699 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ED7CE590F09B4C05A0530ED5AC83C699 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ED7CE590F09B4C05A0530ED5AC83C699 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:13.849 05:42:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g ED7CE590F09B4C05A0530ED5AC83C699 00:31:14.165 [2024-11-20 05:42:33.984159] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:31:14.165 [2024-11-20 05:42:33.984216] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:31:14.165 [2024-11-20 05:42:33.984229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:14.165 2024/11/20 05:42:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:ED7CE590F09B4C05A0530ED5AC83C699 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:31:14.165 request: 00:31:14.165 { 00:31:14.165 "method": "nvmf_subsystem_add_ns", 00:31:14.165 "params": { 00:31:14.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.165 "namespace": { 00:31:14.165 "bdev_name": "invalid", 00:31:14.165 "nsid": 1, 00:31:14.165 "nguid": "ED7CE590F09B4C05A0530ED5AC83C699", 00:31:14.165 "no_auto_visible": false 00:31:14.165 } 00:31:14.165 } 00:31:14.165 } 00:31:14.165 Got JSON-RPC error response 00:31:14.165 GoRPCClient: error on JSON-RPC call 00:31:14.165 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:31:14.165 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:14.165 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:14.165 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:14.165 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid ed7ce590-f09b-4c05-a053-0ed5ac83c699 00:31:14.165 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:31:14.165 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g ED7CE590F09B4C05A0530ED5AC83C699 -i 00:31:14.732 05:42:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@143 -- # sleep 2s 00:31:16.634 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:31:16.634 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:31:16.634 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76314 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 76314 ']' 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 76314 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76314 00:31:16.892 killing process with pid 76314 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76314' 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 76314 00:31:16.892 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 76314 00:31:17.151 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.410 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.410 rmmod nvme_tcp 00:31:17.669 rmmod nvme_fabrics 00:31:17.669 rmmod nvme_keyring 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 75924 ']' 00:31:17.669 05:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75924 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 75924 ']' 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 75924 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75924 00:31:17.669 killing process with pid 75924 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75924' 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 75924 00:31:17.669 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 75924 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:17.927 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:31:18.186 00:31:18.186 real 0m23.034s 00:31:18.186 user 0m37.182s 00:31:18.186 sys 0m5.005s 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:31:18.186 ************************************ 00:31:18.186 END TEST nvmf_ns_masking 00:31:18.186 ************************************ 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:18.186 ************************************ 00:31:18.186 START TEST nvmf_auth_target 00:31:18.186 ************************************ 00:31:18.186 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:31:18.186 * Looking for test storage... 
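The nvmf_veth_fini teardown traced above and the nvmf_veth_init setup that the auth test runs below are mirror images of each other: two veth pairs for the initiator side, two for the target side (moved into the nvmf_tgt_ns_spdk namespace), all joined through the nvmf_br bridge. Condensed from the setup trace, for one initiator pair and one target pair (interface names and 10.0.0.x addressing as in this run):

    # One initiator pair and one target pair of the nvmf veth topology:
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br    # likewise nvmf_tgt_br
    # bring each link up, then open TCP port 4420 in iptables as at @217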
00:31:18.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:18.186 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:18.186 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:31:18.186 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.446 --rc genhtml_branch_coverage=1 00:31:18.446 --rc genhtml_function_coverage=1 00:31:18.446 --rc genhtml_legend=1 00:31:18.446 --rc geninfo_all_blocks=1 00:31:18.446 --rc geninfo_unexecuted_blocks=1 00:31:18.446 00:31:18.446 ' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.446 --rc genhtml_branch_coverage=1 00:31:18.446 --rc genhtml_function_coverage=1 00:31:18.446 --rc genhtml_legend=1 00:31:18.446 --rc geninfo_all_blocks=1 00:31:18.446 --rc geninfo_unexecuted_blocks=1 00:31:18.446 00:31:18.446 ' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.446 --rc genhtml_branch_coverage=1 00:31:18.446 --rc genhtml_function_coverage=1 00:31:18.446 --rc genhtml_legend=1 00:31:18.446 --rc geninfo_all_blocks=1 00:31:18.446 --rc geninfo_unexecuted_blocks=1 00:31:18.446 00:31:18.446 ' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:18.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.446 --rc genhtml_branch_coverage=1 00:31:18.446 --rc genhtml_function_coverage=1 00:31:18.446 --rc genhtml_legend=1 00:31:18.446 --rc geninfo_all_blocks=1 00:31:18.446 --rc geninfo_unexecuted_blocks=1 00:31:18.446 00:31:18.446 ' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.446 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:18.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:18.447 
05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:18.447 Cannot find device "nvmf_init_br" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:18.447 Cannot find device "nvmf_init_br2" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:18.447 Cannot find device "nvmf_tgt_br" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:18.447 Cannot find device "nvmf_tgt_br2" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:18.447 Cannot find device "nvmf_init_br" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:18.447 Cannot find device "nvmf_init_br2" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:18.447 Cannot find device "nvmf_tgt_br" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:18.447 Cannot find device "nvmf_tgt_br2" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:18.447 Cannot find device "nvmf_br" 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:31:18.447 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:18.706 Cannot find device "nvmf_init_if" 00:31:18.706 05:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:18.706 Cannot find device "nvmf_init_if2" 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:18.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:18.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:18.706 05:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:18.706 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:18.707 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:18.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:18.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:31:18.966 00:31:18.966 --- 10.0.0.3 ping statistics --- 00:31:18.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.966 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:18.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:18.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:31:18.966 00:31:18.966 --- 10.0.0.4 ping statistics --- 00:31:18.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.966 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:18.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:18.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:31:18.966 00:31:18.966 --- 10.0.0.1 ping statistics --- 00:31:18.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.966 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:18.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:18.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:31:18.966 00:31:18.966 --- 10.0.0.2 ping statistics --- 00:31:18.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:18.966 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76816 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76816 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 76816 ']' 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
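With connectivity verified, the auth test generates its DH-CHAP secrets. Each gen_dhchap_key <digest> <len> call traced below follows the same recipe: draw len/2 random bytes with xxd, wrap the hex string into a DHHC-1 secret for the chosen digest (done by the inline python snippet in format_key), and store it in a mode-0600 temp file. In outline, with the DHHC-1 encoding itself elided since the trace does not show it:

    # Outline of gen_dhchap_key null 48 as traced below; the DHHC-1
    # wrapping performed by format_key's python one-liner is elided.
    digest=null len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 48 hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # format_key DHHC-1 "$key" 0 would produce the "DHHC-1:..." blob for $file
    chmod 0600 "$file"
    echo "$file"    # path recorded as keys[0]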
00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:18.966 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76860 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a4287e59b0dfbb292e600014fe00e1120e41c8c81855a2a6 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3Cd 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a4287e59b0dfbb292e600014fe00e1120e41c8c81855a2a6 0 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a4287e59b0dfbb292e600014fe00e1120e41c8c81855a2a6 0 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a4287e59b0dfbb292e600014fe00e1120e41c8c81855a2a6 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:31:19.909 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:31:20.181 05:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3Cd 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3Cd 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.3Cd 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7b29860941d6dcb964aecdb57397b8bb834ba27e3c4e7fc21be82175a6f6c99a 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Tkh 00:31:20.181 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7b29860941d6dcb964aecdb57397b8bb834ba27e3c4e7fc21be82175a6f6c99a 3 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7b29860941d6dcb964aecdb57397b8bb834ba27e3c4e7fc21be82175a6f6c99a 3 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7b29860941d6dcb964aecdb57397b8bb834ba27e3c4e7fc21be82175a6f6c99a 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Tkh 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Tkh 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Tkh 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:31:20.182 05:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=911eb65c75bf1ae572d92f2893d44d5e 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Uz3 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 911eb65c75bf1ae572d92f2893d44d5e 1 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 911eb65c75bf1ae572d92f2893d44d5e 1 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=911eb65c75bf1ae572d92f2893d44d5e 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:31:20.182 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Uz3 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Uz3 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Uz3 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ecb0f2ba5f9297253e56836fdd942f7f56f56bc123be60d6 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.w4v 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ecb0f2ba5f9297253e56836fdd942f7f56f56bc123be60d6 2 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ecb0f2ba5f9297253e56836fdd942f7f56f56bc123be60d6 2 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ecb0f2ba5f9297253e56836fdd942f7f56f56bc123be60d6 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.w4v 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.w4v 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.w4v 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3ccf77f59a6846082639efbed498298b31e07bfb67eb164e 00:31:20.182 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.UsH 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3ccf77f59a6846082639efbed498298b31e07bfb67eb164e 2 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3ccf77f59a6846082639efbed498298b31e07bfb67eb164e 2 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3ccf77f59a6846082639efbed498298b31e07bfb67eb164e 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.UsH 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.UsH 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.UsH 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:31:20.441 05:42:40 
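An aside for readers decoding the gen_dhchap_key traces above: each call reads len/2 random bytes as hex with xxd, mints a mode-0600 temp file, and wraps the hex into a DHHC-1 secret via the inline "python -" step whose body the trace does not show. A minimal stand-alone sketch of that flow, assuming the DHHC-1 payload is base64(key || CRC32(key)) as in nvme-cli's gen-dhchap-key; the python body is an illustration of that assumption, not SPDK's verbatim code:

key=$(xxd -p -c0 -l 32 /dev/urandom)   # 32 random bytes -> 64 hex chars (the "sha512 64" case)
file=$(mktemp -t spdk.key-sha512.XXX)
python3 - "$key" 3 > "$file" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])               # 0=null 1=sha256 2=sha384 3=sha512, per the digests map above
blob = key + binascii.crc32(key).to_bytes(4, "little")   # assumed CRC32 trailer
print(f"DHHC-1:{digest:02}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$file"
echo "$file"                            # the path the caller stores into keys[]/ckeys[]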
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ff275cc2e76d75a33cd4a5a60b6af69d 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.uFb 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ff275cc2e76d75a33cd4a5a60b6af69d 1 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ff275cc2e76d75a33cd4a5a60b6af69d 1 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ff275cc2e76d75a33cd4a5a60b6af69d 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.uFb 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.uFb 00:31:20.441 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.uFb 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7db366845eb1004f13291f6438d298920be4ffb16c8aac16ec65257a04d8b300 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8UX 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
7db366845eb1004f13291f6438d298920be4ffb16c8aac16ec65257a04d8b300 3 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7db366845eb1004f13291f6438d298920be4ffb16c8aac16ec65257a04d8b300 3 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7db366845eb1004f13291f6438d298920be4ffb16c8aac16ec65257a04d8b300 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8UX 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8UX 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8UX 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76816 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 76816 ']' 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:20.442 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76860 /var/tmp/host.sock 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 76860 ']' 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
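Net effect of the target/auth.sh@94-97 blocks above, gathered in one place; these are the paths minted in this run, and ckeys[3] is deliberately left empty so key3 is later registered without a controller key:

keys[0]=/tmp/spdk.key-null.3Cd;    ckeys[0]=/tmp/spdk.key-sha512.Tkh
keys[1]=/tmp/spdk.key-sha256.Uz3;  ckeys[1]=/tmp/spdk.key-sha384.w4v
keys[2]=/tmp/spdk.key-sha384.UsH;  ckeys[2]=/tmp/spdk.key-sha256.uFb
keys[3]=/tmp/spdk.key-sha512.8UX;  ckeys[3]=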
00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3Cd 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.009 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.267 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.267 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.3Cd 00:31:21.267 05:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3Cd 00:31:21.525 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Tkh ]] 00:31:21.525 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Tkh 00:31:21.525 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.525 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.525 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.525 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Tkh 00:31:21.525 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Tkh 00:31:21.783 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:31:21.783 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Uz3 00:31:21.783 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.783 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Uz3 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Uz3 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.w4v ]] 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w4v 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w4v 00:31:21.784 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w4v 00:31:22.042 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:31:22.042 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UsH 00:31:22.042 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.042 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.042 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.042 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.UsH 00:31:22.042 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.UsH 00:31:22.300 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.uFb ]] 00:31:22.300 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uFb 00:31:22.300 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.300 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.300 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.300 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uFb 00:31:22.300 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uFb 00:31:22.558 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:31:22.558 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8UX 00:31:22.558 05:42:42 
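Each slot is registered twice with keyring_file_add_key: once through rpc_cmd against the target's default RPC socket (/var/tmp/spdk.sock, the one waitforlisten 76816 polled above) and once through the hostrpc wrapper, which is plain rpc.py pointed at /var/tmp/host.sock. The pattern for the key1/ckey1 pair from this run:

# target side: the subsystem will later reference these by name
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.Uz3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w4v
# host side: same names, resolved in the initiator app's keyring
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Uz3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w4v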
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.558 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:22.558 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.558 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8UX 00:31:22.558 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8UX 00:31:22.816 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:31:22.816 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:31:22.816 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:22.816 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:22.816 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:22.816 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:23.075 05:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:23.334 00:31:23.334 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:23.334 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:23.334 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:23.592 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.592 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:23.592 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.592 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:23.592 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.592 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:23.592 { 00:31:23.592 "auth": { 00:31:23.592 "dhgroup": "null", 00:31:23.592 "digest": "sha256", 00:31:23.592 "state": "completed" 00:31:23.592 }, 00:31:23.592 "cntlid": 1, 00:31:23.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:23.592 "listen_address": { 00:31:23.592 "adrfam": "IPv4", 00:31:23.592 "traddr": "10.0.0.3", 00:31:23.592 "trsvcid": "4420", 00:31:23.592 "trtype": "TCP" 00:31:23.592 }, 00:31:23.592 "peer_address": { 00:31:23.592 "adrfam": "IPv4", 00:31:23.592 "traddr": "10.0.0.1", 00:31:23.592 "trsvcid": "60392", 00:31:23.592 "trtype": "TCP" 00:31:23.592 }, 00:31:23.592 "qid": 0, 00:31:23.592 "state": "enabled", 00:31:23.592 "thread": "nvmf_tgt_poll_group_000" 00:31:23.592 } 00:31:23.592 ]' 00:31:23.592 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:23.850 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:23.850 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:23.850 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:31:23.850 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:23.850 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:23.850 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:23.850 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:24.109 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:24.109 05:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:28.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.297 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.297 05:42:47 
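The connect_authenticate helper that each round calls reduces to four RPCs; reconstructed below for the sha256/null/key1 round the trace has just entered, with $uuid standing for 9b54f445-f067-44f7-9ac3-ecdb7acf7203. This is a condensed sketch of the traced commands, not the function verbatim:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=9b54f445-f067-44f7-9ac3-ecdb7acf7203
# 1) pin the initiator to one digest/dhgroup combination
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# 2) allow the host on the subsystem, naming its key pair
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "nqn.2014-08.org.nvmexpress:uuid:$uuid" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# 3) attach a controller; DH-HMAC-CHAP runs during this connect
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# 4) dump the qpairs and assert digest/dhgroup/state (see the JSON dumps below)
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0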
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:28.297 00:31:28.297 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:28.297 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:28.297 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:28.556 { 00:31:28.556 "auth": { 00:31:28.556 "dhgroup": "null", 00:31:28.556 "digest": "sha256", 00:31:28.556 "state": "completed" 00:31:28.556 }, 00:31:28.556 "cntlid": 3, 00:31:28.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:28.556 "listen_address": { 00:31:28.556 "adrfam": "IPv4", 00:31:28.556 "traddr": "10.0.0.3", 00:31:28.556 "trsvcid": "4420", 00:31:28.556 "trtype": "TCP" 00:31:28.556 }, 00:31:28.556 "peer_address": { 00:31:28.556 "adrfam": "IPv4", 00:31:28.556 "traddr": "10.0.0.1", 00:31:28.556 "trsvcid": "48114", 00:31:28.556 "trtype": "TCP" 00:31:28.556 }, 00:31:28.556 "qid": 0, 00:31:28.556 "state": "enabled", 00:31:28.556 "thread": "nvmf_tgt_poll_group_000" 00:31:28.556 } 00:31:28.556 ]' 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:28.556 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:28.557 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:28.557 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:31:28.557 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:28.557 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:28.557 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:28.557 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:29.125 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret 
DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:29.125 05:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:29.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:29.694 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.262 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:30.522 00:31:30.522 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:30.522 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:30.522 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:30.781 { 00:31:30.781 "auth": { 00:31:30.781 "dhgroup": "null", 00:31:30.781 "digest": "sha256", 00:31:30.781 "state": "completed" 00:31:30.781 }, 00:31:30.781 "cntlid": 5, 00:31:30.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:30.781 "listen_address": { 00:31:30.781 "adrfam": "IPv4", 00:31:30.781 "traddr": "10.0.0.3", 00:31:30.781 "trsvcid": "4420", 00:31:30.781 "trtype": "TCP" 00:31:30.781 }, 00:31:30.781 "peer_address": { 00:31:30.781 "adrfam": "IPv4", 00:31:30.781 "traddr": "10.0.0.1", 00:31:30.781 "trsvcid": "48150", 00:31:30.781 "trtype": "TCP" 00:31:30.781 }, 00:31:30.781 "qid": 0, 00:31:30.781 "state": "enabled", 00:31:30.781 "thread": "nvmf_tgt_poll_group_000" 00:31:30.781 } 00:31:30.781 ]' 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:30.781 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:31.350 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:31.350 05:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:31.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:31.917 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:32.176 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:32.435 00:31:32.436 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:32.436 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:32.436 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:32.694 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.694 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:32.694 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.694 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:32.694 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.694 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:32.694 { 00:31:32.694 "auth": { 00:31:32.694 "dhgroup": "null", 00:31:32.694 "digest": "sha256", 00:31:32.695 "state": "completed" 00:31:32.695 }, 00:31:32.695 "cntlid": 7, 00:31:32.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:32.695 "listen_address": { 00:31:32.695 "adrfam": "IPv4", 00:31:32.695 "traddr": "10.0.0.3", 00:31:32.695 "trsvcid": "4420", 00:31:32.695 "trtype": "TCP" 00:31:32.695 }, 00:31:32.695 "peer_address": { 00:31:32.695 "adrfam": "IPv4", 00:31:32.695 "traddr": "10.0.0.1", 00:31:32.695 "trsvcid": "48172", 00:31:32.695 "trtype": "TCP" 00:31:32.695 }, 00:31:32.695 "qid": 0, 00:31:32.695 "state": "enabled", 00:31:32.695 "thread": "nvmf_tgt_poll_group_000" 00:31:32.695 } 00:31:32.695 ]' 00:31:32.695 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:32.695 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:32.695 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:32.695 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:31:32.695 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:32.954 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:32.954 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:32.954 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:32.954 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:32.954 05:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:33.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.522 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.780 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:31:33.780 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:33.780 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:33.780 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:31:33.780 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:33.780 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:33.781 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.781 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.781 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.781 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.781 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.781 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.781 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:34.039 00:31:34.039 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:34.039 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:34.039 05:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:34.648 { 00:31:34.648 "auth": { 00:31:34.648 "dhgroup": "ffdhe2048", 00:31:34.648 "digest": "sha256", 00:31:34.648 "state": "completed" 00:31:34.648 }, 00:31:34.648 "cntlid": 9, 00:31:34.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:34.648 "listen_address": { 00:31:34.648 "adrfam": "IPv4", 00:31:34.648 "traddr": "10.0.0.3", 00:31:34.648 "trsvcid": "4420", 00:31:34.648 "trtype": "TCP" 00:31:34.648 }, 00:31:34.648 "peer_address": { 00:31:34.648 "adrfam": "IPv4", 00:31:34.648 "traddr": "10.0.0.1", 00:31:34.648 "trsvcid": "48196", 00:31:34.648 "trtype": "TCP" 00:31:34.648 }, 00:31:34.648 "qid": 0, 00:31:34.648 "state": "enabled", 00:31:34.648 "thread": "nvmf_tgt_poll_group_000" 00:31:34.648 } 00:31:34.648 ]' 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:34.648 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:34.906 
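Given the qpair JSON shape above, the three assertions each round ends with come down to the following, with $rpc as in the earlier sketch and the expected values for this ffdhe2048 round in comments:

qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"    # sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"    # ffdhe2048 (null in the earlier rounds)
jq -r '.[0].auth.state'   <<< "$qpairs"    # completed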
05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:34.906 05:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:35.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:35.473 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.040 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.298 00:31:36.298 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:36.298 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:36.298 05:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:36.556 { 00:31:36.556 "auth": { 00:31:36.556 "dhgroup": "ffdhe2048", 00:31:36.556 "digest": "sha256", 00:31:36.556 "state": "completed" 00:31:36.556 }, 00:31:36.556 "cntlid": 11, 00:31:36.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:36.556 "listen_address": { 00:31:36.556 "adrfam": "IPv4", 00:31:36.556 "traddr": "10.0.0.3", 00:31:36.556 "trsvcid": "4420", 00:31:36.556 "trtype": "TCP" 00:31:36.556 }, 00:31:36.556 "peer_address": { 00:31:36.556 "adrfam": "IPv4", 00:31:36.556 "traddr": "10.0.0.1", 00:31:36.556 "trsvcid": "48220", 00:31:36.556 "trtype": "TCP" 00:31:36.556 }, 00:31:36.556 "qid": 0, 00:31:36.556 "state": "enabled", 00:31:36.556 "thread": "nvmf_tgt_poll_group_000" 00:31:36.556 } 00:31:36.556 ]' 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:36.556 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:36.556 
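Alongside the bdev path, nvme_connect exercises the kernel initiator through nvme-cli, passing the same secrets inline in DHHC-1 text form rather than by keyring name. The shape of the call, with the two secrets elided here since their full values appear in the trace:

uuid=9b54f445-f067-44f7-9ac3-ecdb7acf7203
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)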
05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:36.814 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:36.814 05:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:37.379 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:37.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:37.379 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:37.379 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.379 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:37.379 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.379 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:37.379 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:37.380 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.637 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:37.638 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:31:37.638 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.638 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:37.638 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:38.204 00:31:38.204 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:38.204 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:38.204 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:38.204 { 00:31:38.204 "auth": { 00:31:38.204 "dhgroup": "ffdhe2048", 00:31:38.204 "digest": "sha256", 00:31:38.204 "state": "completed" 00:31:38.204 }, 00:31:38.204 "cntlid": 13, 00:31:38.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:38.204 "listen_address": { 00:31:38.204 "adrfam": "IPv4", 00:31:38.204 "traddr": "10.0.0.3", 00:31:38.204 "trsvcid": "4420", 00:31:38.204 "trtype": "TCP" 00:31:38.204 }, 00:31:38.204 "peer_address": { 00:31:38.204 "adrfam": "IPv4", 00:31:38.204 "traddr": "10.0.0.1", 00:31:38.204 "trsvcid": "54380", 00:31:38.204 "trtype": "TCP" 00:31:38.204 }, 00:31:38.204 "qid": 0, 00:31:38.204 "state": "enabled", 00:31:38.204 "thread": "nvmf_tgt_poll_group_000" 00:31:38.204 } 00:31:38.204 ]' 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:38.204 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:38.464 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:38.464 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:38.464 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:38.464 05:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:38.464 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:38.723 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:38.723 05:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:39.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.291 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
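The key3 iterations above attach with --dhchap-key key3 and no --dhchap-ctrlr-key, while the key1/key2 iterations pass both flags. That follows from the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible in the connect_authenticate traces: bash's ${var:+word} substitutes word only when var is set and non-empty, so an empty ckeys slot drops the controller-key flag pair entirely. A minimal sketch of the idiom follows; the array contents and the echo wrapper are hypothetical placeholders rather than values from this run, and the real target/auth.sh indexes with its positional parameter $3 instead of a loop variable:

    # Hypothetical key table: slot 3 deliberately has no controller secret,
    # mirroring the behaviour captured in the traces above.
    ckeys=("ckey0-secret" "ckey1-secret" "ckey2-secret" "")

    for keyid in "${!ckeys[@]}"; do
        # ${var:+word} expands to word only when var is set and non-empty,
        # so for keyid=3 the ckey array stays empty and no flags are emitted.
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo rpc.py bdev_nvme_attach_controller -b nvme0 \
            --dhchap-key "key$keyid" "${ckey[@]}"
    done

After each attach, the harness verifies the negotiated session by piping nvmf_subsystem_get_qpairs through jq and comparing .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state against the expected sha256 / ffdhe* / completed values, then detaches nvme0 before moving to the next key index, as the surrounding records show.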
00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:39.549 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:39.808 00:31:39.808 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:39.808 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:39.808 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:40.066 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.066 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:40.066 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.066 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:40.325 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.325 05:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:40.325 { 00:31:40.325 "auth": { 00:31:40.325 "dhgroup": "ffdhe2048", 00:31:40.325 "digest": "sha256", 00:31:40.325 "state": "completed" 00:31:40.325 }, 00:31:40.325 "cntlid": 15, 00:31:40.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:40.325 "listen_address": { 00:31:40.325 "adrfam": "IPv4", 00:31:40.325 "traddr": "10.0.0.3", 00:31:40.325 "trsvcid": "4420", 00:31:40.325 "trtype": "TCP" 00:31:40.325 }, 00:31:40.325 "peer_address": { 00:31:40.325 "adrfam": "IPv4", 00:31:40.325 "traddr": "10.0.0.1", 00:31:40.325 "trsvcid": "54394", 00:31:40.325 "trtype": "TCP" 00:31:40.325 }, 00:31:40.325 "qid": 0, 00:31:40.325 "state": "enabled", 00:31:40.325 "thread": "nvmf_tgt_poll_group_000" 00:31:40.325 } 00:31:40.325 ]' 00:31:40.325 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:40.325 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:40.325 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:40.325 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:31:40.325 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:40.325 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:40.325 
05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:40.325 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:40.584 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:40.584 05:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:41.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.520 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:31:41.786 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.786 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.786 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.786 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:42.065 00:31:42.065 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:42.065 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:42.065 05:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:42.324 { 00:31:42.324 "auth": { 00:31:42.324 "dhgroup": "ffdhe3072", 00:31:42.324 "digest": "sha256", 00:31:42.324 "state": "completed" 00:31:42.324 }, 00:31:42.324 "cntlid": 17, 00:31:42.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:42.324 "listen_address": { 00:31:42.324 "adrfam": "IPv4", 00:31:42.324 "traddr": "10.0.0.3", 00:31:42.324 "trsvcid": "4420", 00:31:42.324 "trtype": "TCP" 00:31:42.324 }, 00:31:42.324 "peer_address": { 00:31:42.324 "adrfam": "IPv4", 00:31:42.324 "traddr": "10.0.0.1", 00:31:42.324 "trsvcid": "54424", 00:31:42.324 "trtype": "TCP" 00:31:42.324 }, 00:31:42.324 "qid": 0, 00:31:42.324 "state": "enabled", 00:31:42.324 "thread": "nvmf_tgt_poll_group_000" 00:31:42.324 } 00:31:42.324 ]' 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:42.324 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:42.583 05:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:42.583 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:42.583 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:42.841 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:42.841 05:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:43.408 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:43.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:43.408 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:43.408 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.408 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:43.409 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.409 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:43.409 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:43.409 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.667 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:43.926 00:31:43.926 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:43.926 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:43.926 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:44.185 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.185 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:44.185 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.185 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:44.185 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.185 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:44.185 { 00:31:44.185 "auth": { 00:31:44.185 "dhgroup": "ffdhe3072", 00:31:44.185 "digest": "sha256", 00:31:44.185 "state": "completed" 00:31:44.185 }, 00:31:44.185 "cntlid": 19, 00:31:44.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:44.185 "listen_address": { 00:31:44.185 "adrfam": "IPv4", 00:31:44.186 "traddr": "10.0.0.3", 00:31:44.186 "trsvcid": "4420", 00:31:44.186 "trtype": "TCP" 00:31:44.186 }, 00:31:44.186 "peer_address": { 00:31:44.186 "adrfam": "IPv4", 00:31:44.186 "traddr": "10.0.0.1", 00:31:44.186 "trsvcid": "54438", 00:31:44.186 "trtype": "TCP" 00:31:44.186 }, 00:31:44.186 "qid": 0, 00:31:44.186 "state": "enabled", 00:31:44.186 "thread": "nvmf_tgt_poll_group_000" 00:31:44.186 } 00:31:44.186 ]' 00:31:44.186 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:44.445 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:44.445 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:44.445 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:44.445 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:44.445 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:44.445 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:44.445 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:44.705 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:44.705 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:45.273 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:45.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:45.273 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:45.533 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.533 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:45.533 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.533 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:45.533 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:45.533 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.793 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.052 00:31:46.052 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:46.052 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:46.052 05:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:46.312 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.312 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:46.312 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.312 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:46.312 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.312 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:46.312 { 00:31:46.312 "auth": { 00:31:46.312 "dhgroup": "ffdhe3072", 00:31:46.312 "digest": "sha256", 00:31:46.312 "state": "completed" 00:31:46.312 }, 00:31:46.312 "cntlid": 21, 00:31:46.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:46.312 "listen_address": { 00:31:46.312 "adrfam": "IPv4", 00:31:46.312 "traddr": "10.0.0.3", 00:31:46.312 "trsvcid": "4420", 00:31:46.312 "trtype": "TCP" 00:31:46.312 }, 00:31:46.312 "peer_address": { 00:31:46.312 "adrfam": "IPv4", 00:31:46.312 "traddr": "10.0.0.1", 00:31:46.312 "trsvcid": "54484", 00:31:46.312 "trtype": "TCP" 00:31:46.312 }, 00:31:46.312 "qid": 0, 00:31:46.312 "state": "enabled", 00:31:46.312 "thread": "nvmf_tgt_poll_group_000" 00:31:46.312 } 00:31:46.312 ]' 00:31:46.312 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:46.572 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:46.572 05:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:46.572 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:46.572 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:46.572 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:46.572 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:46.572 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:46.831 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:46.831 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:47.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:47.768 05:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:48.336 00:31:48.336 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:48.336 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:48.336 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:48.595 { 00:31:48.595 "auth": { 00:31:48.595 "dhgroup": "ffdhe3072", 00:31:48.595 "digest": "sha256", 00:31:48.595 "state": "completed" 00:31:48.595 }, 00:31:48.595 "cntlid": 23, 00:31:48.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:48.595 "listen_address": { 00:31:48.595 "adrfam": "IPv4", 00:31:48.595 "traddr": "10.0.0.3", 00:31:48.595 "trsvcid": "4420", 00:31:48.595 "trtype": "TCP" 00:31:48.595 }, 00:31:48.595 "peer_address": { 00:31:48.595 "adrfam": "IPv4", 00:31:48.595 "traddr": "10.0.0.1", 00:31:48.595 "trsvcid": "41408", 00:31:48.595 "trtype": "TCP" 00:31:48.595 }, 00:31:48.595 "qid": 0, 00:31:48.595 "state": "enabled", 00:31:48.595 "thread": "nvmf_tgt_poll_group_000" 00:31:48.595 } 00:31:48.595 ]' 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:31:48.595 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:48.854 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:48.854 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:48.854 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:49.121 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:49.121 05:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:49.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:49.687 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:49.946 05:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:50.204 00:31:50.462 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:50.462 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:50.463 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:50.721 { 00:31:50.721 "auth": { 00:31:50.721 "dhgroup": "ffdhe4096", 00:31:50.721 "digest": "sha256", 00:31:50.721 "state": "completed" 00:31:50.721 }, 00:31:50.721 "cntlid": 25, 00:31:50.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:50.721 "listen_address": { 00:31:50.721 "adrfam": "IPv4", 00:31:50.721 "traddr": "10.0.0.3", 00:31:50.721 "trsvcid": "4420", 00:31:50.721 "trtype": "TCP" 00:31:50.721 }, 00:31:50.721 "peer_address": { 00:31:50.721 "adrfam": "IPv4", 00:31:50.721 "traddr": "10.0.0.1", 00:31:50.721 "trsvcid": "41424", 00:31:50.721 "trtype": "TCP" 00:31:50.721 }, 00:31:50.721 "qid": 0, 00:31:50.721 "state": "enabled", 00:31:50.721 "thread": "nvmf_tgt_poll_group_000" 00:31:50.721 } 00:31:50.721 ]' 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:50.721 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:50.979 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:50.979 05:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:51.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.915 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:52.483 00:31:52.483 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:52.483 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:52.483 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:52.743 { 00:31:52.743 "auth": { 00:31:52.743 "dhgroup": "ffdhe4096", 00:31:52.743 "digest": "sha256", 00:31:52.743 "state": "completed" 00:31:52.743 }, 00:31:52.743 "cntlid": 27, 00:31:52.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:52.743 "listen_address": { 00:31:52.743 "adrfam": "IPv4", 00:31:52.743 "traddr": "10.0.0.3", 00:31:52.743 "trsvcid": "4420", 00:31:52.743 "trtype": "TCP" 00:31:52.743 }, 00:31:52.743 "peer_address": { 00:31:52.743 "adrfam": "IPv4", 00:31:52.743 "traddr": "10.0.0.1", 00:31:52.743 "trsvcid": "41448", 00:31:52.743 "trtype": "TCP" 00:31:52.743 }, 00:31:52.743 "qid": 0, 
00:31:52.743 "state": "enabled", 00:31:52.743 "thread": "nvmf_tgt_poll_group_000" 00:31:52.743 } 00:31:52.743 ]' 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:52.743 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:53.002 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:53.002 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:53.002 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:53.260 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:53.260 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:53.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:53.826 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.084 05:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.650 00:31:54.650 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:54.650 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:54.650 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:54.909 { 00:31:54.909 "auth": { 00:31:54.909 "dhgroup": "ffdhe4096", 00:31:54.909 "digest": "sha256", 00:31:54.909 "state": "completed" 00:31:54.909 }, 00:31:54.909 "cntlid": 29, 00:31:54.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:54.909 "listen_address": { 00:31:54.909 "adrfam": "IPv4", 00:31:54.909 "traddr": "10.0.0.3", 00:31:54.909 "trsvcid": "4420", 00:31:54.909 "trtype": "TCP" 00:31:54.909 }, 00:31:54.909 "peer_address": { 00:31:54.909 "adrfam": "IPv4", 00:31:54.909 "traddr": "10.0.0.1", 
00:31:54.909 "trsvcid": "41478", 00:31:54.909 "trtype": "TCP" 00:31:54.909 }, 00:31:54.909 "qid": 0, 00:31:54.909 "state": "enabled", 00:31:54.909 "thread": "nvmf_tgt_poll_group_000" 00:31:54.909 } 00:31:54.909 ]' 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:54.909 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:55.484 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:55.484 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:56.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:56.049 05:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:56.614 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:31:56.871 00:31:56.871 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:56.871 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:56.871 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:57.129 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.129 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:57.129 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.129 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.129 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.129 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:57.129 { 00:31:57.129 "auth": { 00:31:57.129 "dhgroup": "ffdhe4096", 00:31:57.129 "digest": "sha256", 00:31:57.129 "state": "completed" 00:31:57.129 }, 00:31:57.129 "cntlid": 31, 00:31:57.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:57.129 "listen_address": { 00:31:57.129 "adrfam": "IPv4", 00:31:57.129 "traddr": "10.0.0.3", 00:31:57.129 "trsvcid": "4420", 00:31:57.129 "trtype": "TCP" 00:31:57.129 }, 00:31:57.129 "peer_address": { 00:31:57.129 "adrfam": "IPv4", 00:31:57.129 "traddr": 
"10.0.0.1", 00:31:57.129 "trsvcid": "41506", 00:31:57.129 "trtype": "TCP" 00:31:57.129 }, 00:31:57.129 "qid": 0, 00:31:57.129 "state": "enabled", 00:31:57.129 "thread": "nvmf_tgt_poll_group_000" 00:31:57.129 } 00:31:57.129 ]' 00:31:57.129 05:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:57.129 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:57.129 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:57.387 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:31:57.387 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:57.388 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:57.388 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:57.388 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:57.646 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:57.646 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:31:58.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:58.212 05:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.471 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.472 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.472 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.730 00:31:58.730 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:31:58.730 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:31:58.730 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:31:59.297 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.297 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:31:59.297 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.297 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.297 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.297 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:31:59.297 { 00:31:59.297 "auth": { 00:31:59.297 "dhgroup": "ffdhe6144", 00:31:59.297 "digest": "sha256", 00:31:59.297 "state": "completed" 00:31:59.297 }, 00:31:59.297 "cntlid": 33, 00:31:59.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:31:59.297 "listen_address": { 00:31:59.297 "adrfam": "IPv4", 00:31:59.297 "traddr": "10.0.0.3", 00:31:59.297 "trsvcid": "4420", 00:31:59.297 
"trtype": "TCP" 00:31:59.297 }, 00:31:59.297 "peer_address": { 00:31:59.297 "adrfam": "IPv4", 00:31:59.297 "traddr": "10.0.0.1", 00:31:59.297 "trsvcid": "60596", 00:31:59.297 "trtype": "TCP" 00:31:59.297 }, 00:31:59.297 "qid": 0, 00:31:59.297 "state": "enabled", 00:31:59.297 "thread": "nvmf_tgt_poll_group_000" 00:31:59.297 } 00:31:59.297 ]' 00:31:59.297 05:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:31:59.297 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:31:59.297 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:31:59.297 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:31:59.297 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:31:59.297 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:31:59.297 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:31:59.297 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:31:59.555 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:31:59.555 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:00.122 05:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:00.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:00.122 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:00.122 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.122 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.122 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.122 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:00.122 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:00.122 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.688 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.962 00:32:00.962 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:00.962 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:00.962 05:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:01.249 { 00:32:01.249 "auth": { 00:32:01.249 "dhgroup": "ffdhe6144", 00:32:01.249 "digest": "sha256", 00:32:01.249 "state": "completed" 00:32:01.249 }, 00:32:01.249 "cntlid": 35, 00:32:01.249 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:01.249 "listen_address": { 00:32:01.249 "adrfam": "IPv4", 00:32:01.249 "traddr": "10.0.0.3", 00:32:01.249 "trsvcid": "4420", 00:32:01.249 "trtype": "TCP" 00:32:01.249 }, 00:32:01.249 "peer_address": { 00:32:01.249 "adrfam": "IPv4", 00:32:01.249 "traddr": "10.0.0.1", 00:32:01.249 "trsvcid": "60624", 00:32:01.249 "trtype": "TCP" 00:32:01.249 }, 00:32:01.249 "qid": 0, 00:32:01.249 "state": "enabled", 00:32:01.249 "thread": "nvmf_tgt_poll_group_000" 00:32:01.249 } 00:32:01.249 ]' 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:01.249 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:01.508 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:01.508 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:01.508 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:01.508 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:01.508 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:01.766 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:01.766 05:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:02.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:02.334 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:02.902 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:03.161 00:32:03.420 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:03.420 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:03.420 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:03.678 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.678 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:03.678 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.678 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:03.678 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.678 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:03.678 { 00:32:03.678 "auth": { 00:32:03.678 "dhgroup": "ffdhe6144", 
00:32:03.678 "digest": "sha256", 00:32:03.678 "state": "completed" 00:32:03.678 }, 00:32:03.678 "cntlid": 37, 00:32:03.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:03.678 "listen_address": { 00:32:03.678 "adrfam": "IPv4", 00:32:03.678 "traddr": "10.0.0.3", 00:32:03.678 "trsvcid": "4420", 00:32:03.678 "trtype": "TCP" 00:32:03.678 }, 00:32:03.678 "peer_address": { 00:32:03.678 "adrfam": "IPv4", 00:32:03.678 "traddr": "10.0.0.1", 00:32:03.678 "trsvcid": "60654", 00:32:03.678 "trtype": "TCP" 00:32:03.678 }, 00:32:03.678 "qid": 0, 00:32:03.678 "state": "enabled", 00:32:03.678 "thread": "nvmf_tgt_poll_group_000" 00:32:03.678 } 00:32:03.679 ]' 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:03.679 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:03.937 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:03.937 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:04.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:04.872 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:32:04.873 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.873 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:04.873 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.873 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:32:04.873 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:04.873 05:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:05.439 00:32:05.439 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:05.439 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:05.439 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:05.698 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.698 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:05.698 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.698 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:05.698 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.698 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:05.698 { 00:32:05.699 "auth": { 00:32:05.699 "dhgroup": 
"ffdhe6144", 00:32:05.699 "digest": "sha256", 00:32:05.699 "state": "completed" 00:32:05.699 }, 00:32:05.699 "cntlid": 39, 00:32:05.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:05.699 "listen_address": { 00:32:05.699 "adrfam": "IPv4", 00:32:05.699 "traddr": "10.0.0.3", 00:32:05.699 "trsvcid": "4420", 00:32:05.699 "trtype": "TCP" 00:32:05.699 }, 00:32:05.699 "peer_address": { 00:32:05.699 "adrfam": "IPv4", 00:32:05.699 "traddr": "10.0.0.1", 00:32:05.699 "trsvcid": "60678", 00:32:05.699 "trtype": "TCP" 00:32:05.699 }, 00:32:05.699 "qid": 0, 00:32:05.699 "state": "enabled", 00:32:05.699 "thread": "nvmf_tgt_poll_group_000" 00:32:05.699 } 00:32:05.699 ]' 00:32:05.699 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:05.958 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:05.958 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:05.958 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:05.958 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:05.958 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:05.958 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:05.958 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:06.217 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:06.217 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:06.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:06.794 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.069 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.636 00:32:07.636 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:07.636 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:07.636 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:07.894 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.894 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:07.894 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.894 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.894 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.894 05:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:07.894 { 00:32:07.894 "auth": { 00:32:07.894 "dhgroup": "ffdhe8192", 00:32:07.894 "digest": "sha256", 00:32:07.894 "state": "completed" 00:32:07.894 }, 00:32:07.894 "cntlid": 41, 00:32:07.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:07.894 "listen_address": { 00:32:07.894 "adrfam": "IPv4", 00:32:07.894 "traddr": "10.0.0.3", 00:32:07.894 "trsvcid": "4420", 00:32:07.894 "trtype": "TCP" 00:32:07.894 }, 00:32:07.894 "peer_address": { 00:32:07.894 "adrfam": "IPv4", 00:32:07.894 "traddr": "10.0.0.1", 00:32:07.894 "trsvcid": "57064", 00:32:07.894 "trtype": "TCP" 00:32:07.894 }, 00:32:07.894 "qid": 0, 00:32:07.894 "state": "enabled", 00:32:07.894 "thread": "nvmf_tgt_poll_group_000" 00:32:07.894 } 00:32:07.894 ]' 00:32:07.894 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:08.152 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:08.152 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:08.152 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:08.152 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:08.152 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:08.152 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:08.152 05:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:08.410 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:08.410 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:08.978 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:08.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:08.978 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:08.978 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.978 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:08.978 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.978 05:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:08.978 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:08.978 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.236 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:09.800 00:32:09.800 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:09.800 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:09.800 05:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.366 05:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:10.366 { 00:32:10.366 "auth": { 00:32:10.366 "dhgroup": "ffdhe8192", 00:32:10.366 "digest": "sha256", 00:32:10.366 "state": "completed" 00:32:10.366 }, 00:32:10.366 "cntlid": 43, 00:32:10.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:10.366 "listen_address": { 00:32:10.366 "adrfam": "IPv4", 00:32:10.366 "traddr": "10.0.0.3", 00:32:10.366 "trsvcid": "4420", 00:32:10.366 "trtype": "TCP" 00:32:10.366 }, 00:32:10.366 "peer_address": { 00:32:10.366 "adrfam": "IPv4", 00:32:10.366 "traddr": "10.0.0.1", 00:32:10.366 "trsvcid": "57102", 00:32:10.366 "trtype": "TCP" 00:32:10.366 }, 00:32:10.366 "qid": 0, 00:32:10.366 "state": "enabled", 00:32:10.366 "thread": "nvmf_tgt_poll_group_000" 00:32:10.366 } 00:32:10.366 ]' 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:10.366 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:10.624 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:10.624 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:11.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
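[Editor's sketch] The [[ sha256 == \s\h\a\2\5\6 ]] lines above are bash xtrace renderings of ordinary pattern matches: set -x backslash-escapes the right-hand side of [[ == ]] character by character when printing it. The verification each round performs reduces to the following sketch (rpc_cmd again assumed to address the target's RPC socket, per the trace):

  # fetch the qpairs for the subsystem and confirm the auth negotiation result
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # the connect only counts if DH-HMAC-CHAP completed with the expected parameters
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]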
00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:11.191 05:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.449 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:12.017 00:32:12.017 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:12.017 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:12.017 05:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:12.274 05:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:12.274 { 00:32:12.274 "auth": { 00:32:12.274 "dhgroup": "ffdhe8192", 00:32:12.274 "digest": "sha256", 00:32:12.274 "state": "completed" 00:32:12.274 }, 00:32:12.274 "cntlid": 45, 00:32:12.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:12.274 "listen_address": { 00:32:12.274 "adrfam": "IPv4", 00:32:12.274 "traddr": "10.0.0.3", 00:32:12.274 "trsvcid": "4420", 00:32:12.274 "trtype": "TCP" 00:32:12.274 }, 00:32:12.274 "peer_address": { 00:32:12.274 "adrfam": "IPv4", 00:32:12.274 "traddr": "10.0.0.1", 00:32:12.274 "trsvcid": "57130", 00:32:12.274 "trtype": "TCP" 00:32:12.274 }, 00:32:12.274 "qid": 0, 00:32:12.274 "state": "enabled", 00:32:12.274 "thread": "nvmf_tgt_poll_group_000" 00:32:12.274 } 00:32:12.274 ]' 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:12.274 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:12.532 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:12.532 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:12.532 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:12.532 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:12.532 05:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:13.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:13.466 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:13.725 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.726 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:32:13.726 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:13.726 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:14.290 00:32:14.290 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:14.290 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:14.290 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:14.548 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.548 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:14.548 
05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.548 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:14.548 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.548 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:14.548 { 00:32:14.548 "auth": { 00:32:14.548 "dhgroup": "ffdhe8192", 00:32:14.548 "digest": "sha256", 00:32:14.548 "state": "completed" 00:32:14.548 }, 00:32:14.548 "cntlid": 47, 00:32:14.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:14.548 "listen_address": { 00:32:14.548 "adrfam": "IPv4", 00:32:14.548 "traddr": "10.0.0.3", 00:32:14.548 "trsvcid": "4420", 00:32:14.548 "trtype": "TCP" 00:32:14.548 }, 00:32:14.548 "peer_address": { 00:32:14.548 "adrfam": "IPv4", 00:32:14.548 "traddr": "10.0.0.1", 00:32:14.548 "trsvcid": "57158", 00:32:14.548 "trtype": "TCP" 00:32:14.548 }, 00:32:14.548 "qid": 0, 00:32:14.548 "state": "enabled", 00:32:14.548 "thread": "nvmf_tgt_poll_group_000" 00:32:14.548 } 00:32:14.548 ]' 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:14.806 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:15.063 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:15.063 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:15.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
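[editor's note] At this point the trace leaves the sha256/ffdhe8192 combination and re-enters the outer loops with sha384 and the null DH group. The driver, as reconstructed from the xtrace above, is a triple loop over digest, dhgroup and key id; the exact array contents below are assumptions (only sha256/sha384 and null/ffdhe2048/ffdhe8192 are visible in this part of the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
digests=(sha256 sha384 sha512)                                       # assumed full list
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)    # assumed full list

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Pin the host-side bdev layer to a single digest/dhgroup pair,
            # so a successful attach proves that exact pair was negotiated.
            "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done

connect_authenticate is the script's own helper (target/auth.sh@65-78 in the trace): add the host with its key, attach the controller, run the qpair checks, then detach.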
00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:15.997 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.255 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.255 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.255 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.255 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:16.513 00:32:16.513 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:16.513 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:16.513 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:16.513 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.513 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:16.513 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.513 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:16.772 { 00:32:16.772 "auth": { 00:32:16.772 "dhgroup": "null", 00:32:16.772 "digest": "sha384", 00:32:16.772 "state": "completed" 00:32:16.772 }, 00:32:16.772 "cntlid": 49, 00:32:16.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:16.772 "listen_address": { 00:32:16.772 "adrfam": "IPv4", 00:32:16.772 "traddr": "10.0.0.3", 00:32:16.772 "trsvcid": "4420", 00:32:16.772 "trtype": "TCP" 00:32:16.772 }, 00:32:16.772 "peer_address": { 00:32:16.772 "adrfam": "IPv4", 00:32:16.772 "traddr": "10.0.0.1", 00:32:16.772 "trsvcid": "57168", 00:32:16.772 "trtype": "TCP" 00:32:16.772 }, 00:32:16.772 "qid": 0, 00:32:16.772 "state": "enabled", 00:32:16.772 "thread": "nvmf_tgt_poll_group_000" 00:32:16.772 } 00:32:16.772 ]' 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:16.772 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:17.031 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:17.031 05:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:17.599 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:17.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:17.599 05:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:17.599 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.599 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.599 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.599 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:17.599 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:17.599 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.858 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.859 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.117 00:32:18.117 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:18.117 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:32:18.117 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:18.376 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.376 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:18.376 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.376 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:18.376 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:18.634 { 00:32:18.634 "auth": { 00:32:18.634 "dhgroup": "null", 00:32:18.634 "digest": "sha384", 00:32:18.634 "state": "completed" 00:32:18.634 }, 00:32:18.634 "cntlid": 51, 00:32:18.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:18.634 "listen_address": { 00:32:18.634 "adrfam": "IPv4", 00:32:18.634 "traddr": "10.0.0.3", 00:32:18.634 "trsvcid": "4420", 00:32:18.634 "trtype": "TCP" 00:32:18.634 }, 00:32:18.634 "peer_address": { 00:32:18.634 "adrfam": "IPv4", 00:32:18.634 "traddr": "10.0.0.1", 00:32:18.634 "trsvcid": "52632", 00:32:18.634 "trtype": "TCP" 00:32:18.634 }, 00:32:18.634 "qid": 0, 00:32:18.634 "state": "enabled", 00:32:18.634 "thread": "nvmf_tgt_poll_group_000" 00:32:18.634 } 00:32:18.634 ]' 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:18.634 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:18.893 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:18.893 05:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:19.460 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:19.460 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:19.460 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:19.460 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.460 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.718 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.718 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:19.718 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:19.718 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.977 05:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.236 00:32:20.236 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:20.236 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:20.236 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:20.572 { 00:32:20.572 "auth": { 00:32:20.572 "dhgroup": "null", 00:32:20.572 "digest": "sha384", 00:32:20.572 "state": "completed" 00:32:20.572 }, 00:32:20.572 "cntlid": 53, 00:32:20.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:20.572 "listen_address": { 00:32:20.572 "adrfam": "IPv4", 00:32:20.572 "traddr": "10.0.0.3", 00:32:20.572 "trsvcid": "4420", 00:32:20.572 "trtype": "TCP" 00:32:20.572 }, 00:32:20.572 "peer_address": { 00:32:20.572 "adrfam": "IPv4", 00:32:20.572 "traddr": "10.0.0.1", 00:32:20.572 "trsvcid": "52652", 00:32:20.572 "trtype": "TCP" 00:32:20.572 }, 00:32:20.572 "qid": 0, 00:32:20.572 "state": "enabled", 00:32:20.572 "thread": "nvmf_tgt_poll_group_000" 00:32:20.572 } 00:32:20.572 ]' 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:32:20.572 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:20.832 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:20.832 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:20.832 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:20.832 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:20.832 05:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:21.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:21.399 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:21.658 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:21.917 00:32:21.917 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:21.917 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:32:21.917 05:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:22.175 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.175 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:22.175 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.175 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:22.434 { 00:32:22.434 "auth": { 00:32:22.434 "dhgroup": "null", 00:32:22.434 "digest": "sha384", 00:32:22.434 "state": "completed" 00:32:22.434 }, 00:32:22.434 "cntlid": 55, 00:32:22.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:22.434 "listen_address": { 00:32:22.434 "adrfam": "IPv4", 00:32:22.434 "traddr": "10.0.0.3", 00:32:22.434 "trsvcid": "4420", 00:32:22.434 "trtype": "TCP" 00:32:22.434 }, 00:32:22.434 "peer_address": { 00:32:22.434 "adrfam": "IPv4", 00:32:22.434 "traddr": "10.0.0.1", 00:32:22.434 "trsvcid": "52670", 00:32:22.434 "trtype": "TCP" 00:32:22.434 }, 00:32:22.434 "qid": 0, 00:32:22.434 "state": "enabled", 00:32:22.434 "thread": "nvmf_tgt_poll_group_000" 00:32:22.434 } 00:32:22.434 ]' 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:22.434 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:22.692 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:22.693 05:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:23.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:23.259 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.518 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.777 00:32:23.777 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:23.777 
05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:23.777 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:24.037 { 00:32:24.037 "auth": { 00:32:24.037 "dhgroup": "ffdhe2048", 00:32:24.037 "digest": "sha384", 00:32:24.037 "state": "completed" 00:32:24.037 }, 00:32:24.037 "cntlid": 57, 00:32:24.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:24.037 "listen_address": { 00:32:24.037 "adrfam": "IPv4", 00:32:24.037 "traddr": "10.0.0.3", 00:32:24.037 "trsvcid": "4420", 00:32:24.037 "trtype": "TCP" 00:32:24.037 }, 00:32:24.037 "peer_address": { 00:32:24.037 "adrfam": "IPv4", 00:32:24.037 "traddr": "10.0.0.1", 00:32:24.037 "trsvcid": "52686", 00:32:24.037 "trtype": "TCP" 00:32:24.037 }, 00:32:24.037 "qid": 0, 00:32:24.037 "state": "enabled", 00:32:24.037 "thread": "nvmf_tgt_poll_group_000" 00:32:24.037 } 00:32:24.037 ]' 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:24.037 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:24.296 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:24.296 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:24.296 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:24.296 05:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:24.296 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:24.555 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:24.555 05:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: 
--dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:25.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:25.122 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.381 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.948 00:32:25.948 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:25.948 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:25.948 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:26.206 { 00:32:26.206 "auth": { 00:32:26.206 "dhgroup": "ffdhe2048", 00:32:26.206 "digest": "sha384", 00:32:26.206 "state": "completed" 00:32:26.206 }, 00:32:26.206 "cntlid": 59, 00:32:26.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:26.206 "listen_address": { 00:32:26.206 "adrfam": "IPv4", 00:32:26.206 "traddr": "10.0.0.3", 00:32:26.206 "trsvcid": "4420", 00:32:26.206 "trtype": "TCP" 00:32:26.206 }, 00:32:26.206 "peer_address": { 00:32:26.206 "adrfam": "IPv4", 00:32:26.206 "traddr": "10.0.0.1", 00:32:26.206 "trsvcid": "52728", 00:32:26.206 "trtype": "TCP" 00:32:26.206 }, 00:32:26.206 "qid": 0, 00:32:26.206 "state": "enabled", 00:32:26.206 "thread": "nvmf_tgt_poll_group_000" 00:32:26.206 } 00:32:26.206 ]' 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:26.206 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:26.206 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:26.206 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:26.206 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:26.206 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:26.206 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:26.464 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:26.464 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:27.400 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:27.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:27.400 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:27.400 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.400 05:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.400 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:28.010 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.010 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:28.010 { 00:32:28.010 "auth": { 00:32:28.010 "dhgroup": "ffdhe2048", 00:32:28.010 "digest": "sha384", 00:32:28.010 "state": "completed" 00:32:28.010 }, 00:32:28.010 "cntlid": 61, 00:32:28.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:28.010 "listen_address": { 00:32:28.010 "adrfam": "IPv4", 00:32:28.010 "traddr": "10.0.0.3", 00:32:28.010 "trsvcid": "4420", 00:32:28.010 "trtype": "TCP" 00:32:28.010 }, 00:32:28.010 "peer_address": { 00:32:28.010 "adrfam": "IPv4", 00:32:28.010 "traddr": "10.0.0.1", 00:32:28.010 "trsvcid": "33952", 00:32:28.010 "trtype": "TCP" 00:32:28.010 }, 00:32:28.010 "qid": 0, 00:32:28.010 "state": "enabled", 00:32:28.010 "thread": "nvmf_tgt_poll_group_000" 00:32:28.010 } 00:32:28.010 ]' 00:32:28.268 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:28.268 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:28.268 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:28.268 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:28.268 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:28.268 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:28.268 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:28.268 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:28.527 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:28.527 05:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:29.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:29.093 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:29.352 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:29.920 00:32:29.920 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:29.920 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:29.921 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:30.179 { 00:32:30.179 "auth": { 00:32:30.179 "dhgroup": "ffdhe2048", 00:32:30.179 "digest": "sha384", 00:32:30.179 "state": "completed" 00:32:30.179 }, 00:32:30.179 "cntlid": 63, 00:32:30.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:30.179 "listen_address": { 00:32:30.179 "adrfam": "IPv4", 00:32:30.179 "traddr": "10.0.0.3", 00:32:30.179 "trsvcid": "4420", 00:32:30.179 "trtype": "TCP" 00:32:30.179 }, 00:32:30.179 "peer_address": { 00:32:30.179 "adrfam": "IPv4", 00:32:30.179 "traddr": "10.0.0.1", 00:32:30.179 "trsvcid": "33966", 00:32:30.179 "trtype": "TCP" 00:32:30.179 }, 00:32:30.179 "qid": 0, 00:32:30.179 "state": "enabled", 00:32:30.179 "thread": "nvmf_tgt_poll_group_000" 00:32:30.179 } 00:32:30.179 ]' 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:30.179 05:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:30.438 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:30.438 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:31.372 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:31.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:31.372 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:31.373 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.373 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.373 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.373 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.373 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:31.373 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:31.373 05:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:32:31.373 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.631 00:32:31.631 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:31.631 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:31.631 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:31.890 { 00:32:31.890 "auth": { 00:32:31.890 "dhgroup": "ffdhe3072", 00:32:31.890 "digest": "sha384", 00:32:31.890 "state": "completed" 00:32:31.890 }, 00:32:31.890 "cntlid": 65, 00:32:31.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:31.890 "listen_address": { 00:32:31.890 "adrfam": "IPv4", 00:32:31.890 "traddr": "10.0.0.3", 00:32:31.890 "trsvcid": "4420", 00:32:31.890 "trtype": "TCP" 00:32:31.890 }, 00:32:31.890 "peer_address": { 00:32:31.890 "adrfam": "IPv4", 00:32:31.890 "traddr": "10.0.0.1", 00:32:31.890 "trsvcid": "34000", 00:32:31.890 "trtype": "TCP" 00:32:31.890 }, 00:32:31.890 "qid": 0, 00:32:31.890 "state": "enabled", 00:32:31.890 "thread": "nvmf_tgt_poll_group_000" 00:32:31.890 } 00:32:31.890 ]' 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:31.890 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:32.148 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:32.148 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:32.148 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:32.148 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:32.148 05:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:32.406 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:32.406 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:32.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:32.972 05:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.231 05:43:53 
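
Note: after each attach, the harness verifies the negotiated parameters from both sides. bdev_nvme_get_controllers confirms the controller came up under the expected name (nvme0), and nvmf_subsystem_get_qpairs returns qpair objects whose .auth member carries the negotiated digest, dhgroup, and authentication state, as in the JSON dumps above. A sketch of the checks (jq paths taken from the trace):

    # Host side: the controller must exist.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # Target side: inspect the negotiated auth parameters. For the
    # sha384/ffdhe3072 iteration above the expected output is
    # sha384, ffdhe3072, completed.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
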
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.231 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.490 00:32:33.490 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:33.490 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:33.490 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:33.768 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.768 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:33.768 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.768 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:34.028 { 00:32:34.028 "auth": { 00:32:34.028 "dhgroup": "ffdhe3072", 00:32:34.028 "digest": "sha384", 00:32:34.028 "state": "completed" 00:32:34.028 }, 00:32:34.028 "cntlid": 67, 00:32:34.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:34.028 "listen_address": { 00:32:34.028 "adrfam": "IPv4", 00:32:34.028 "traddr": "10.0.0.3", 00:32:34.028 "trsvcid": "4420", 00:32:34.028 "trtype": "TCP" 00:32:34.028 }, 00:32:34.028 "peer_address": { 00:32:34.028 "adrfam": "IPv4", 00:32:34.028 "traddr": "10.0.0.1", 00:32:34.028 "trsvcid": "34026", 00:32:34.028 "trtype": "TCP" 00:32:34.028 }, 00:32:34.028 "qid": 0, 00:32:34.028 "state": "enabled", 00:32:34.028 "thread": "nvmf_tgt_poll_group_000" 00:32:34.028 } 00:32:34.028 ]' 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:34.028 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:34.287 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:34.287 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:34.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:34.852 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.112 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.370 00:32:35.370 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:35.370 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:35.370 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:35.630 { 00:32:35.630 "auth": { 00:32:35.630 "dhgroup": "ffdhe3072", 00:32:35.630 "digest": "sha384", 00:32:35.630 "state": "completed" 00:32:35.630 }, 00:32:35.630 "cntlid": 69, 00:32:35.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:35.630 "listen_address": { 00:32:35.630 "adrfam": "IPv4", 00:32:35.630 "traddr": "10.0.0.3", 00:32:35.630 "trsvcid": "4420", 00:32:35.630 "trtype": "TCP" 00:32:35.630 }, 00:32:35.630 "peer_address": { 00:32:35.630 "adrfam": "IPv4", 00:32:35.630 "traddr": "10.0.0.1", 00:32:35.630 "trsvcid": "34056", 00:32:35.630 "trtype": "TCP" 00:32:35.630 }, 00:32:35.630 "qid": 0, 00:32:35.630 "state": "enabled", 00:32:35.630 "thread": "nvmf_tgt_poll_group_000" 00:32:35.630 } 00:32:35.630 ]' 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:35.630 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:35.889 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:35.889 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:35.889 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:35.889 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:32:35.889 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:36.149 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:36.149 05:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:36.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:36.717 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:36.976 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:37.235 00:32:37.235 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:37.235 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:37.235 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:37.495 { 00:32:37.495 "auth": { 00:32:37.495 "dhgroup": "ffdhe3072", 00:32:37.495 "digest": "sha384", 00:32:37.495 "state": "completed" 00:32:37.495 }, 00:32:37.495 "cntlid": 71, 00:32:37.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:37.495 "listen_address": { 00:32:37.495 "adrfam": "IPv4", 00:32:37.495 "traddr": "10.0.0.3", 00:32:37.495 "trsvcid": "4420", 00:32:37.495 "trtype": "TCP" 00:32:37.495 }, 00:32:37.495 "peer_address": { 00:32:37.495 "adrfam": "IPv4", 00:32:37.495 "traddr": "10.0.0.1", 00:32:37.495 "trsvcid": "48516", 00:32:37.495 "trtype": "TCP" 00:32:37.495 }, 00:32:37.495 "qid": 0, 00:32:37.495 "state": "enabled", 00:32:37.495 "thread": "nvmf_tgt_poll_group_000" 00:32:37.495 } 00:32:37.495 ]' 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:37.495 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:37.753 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:37.753 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:32:37.753 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:37.753 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:37.753 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:37.753 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:38.011 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:38.011 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:38.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:38.578 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:38.838 05:43:58 
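
Note: every iteration is also replayed through the kernel initiator: the nvme_connect helper wraps nvme-cli, passing the same key material as flat DHHC-1 secrets instead of keyring names. In the DHHC-1:<t>:<base64>: representation the middle field is the key transform (by nvme-cli convention 00 = none, 01/02/03 = SHA-256/-384/-512; note how key0 through key3 in this log carry transforms 00 through 03). A sketch of that leg and its teardown, using values copied from the trace:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 \
        --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 \
        --dhchap-secret 'DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Remove the host entry before moving on to the next key.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203
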
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.838 05:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.414 00:32:39.414 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:39.414 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:39.414 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:39.674 { 00:32:39.674 "auth": { 00:32:39.674 "dhgroup": "ffdhe4096", 00:32:39.674 "digest": "sha384", 00:32:39.674 "state": "completed" 00:32:39.674 }, 00:32:39.674 "cntlid": 73, 00:32:39.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:39.674 "listen_address": { 00:32:39.674 "adrfam": "IPv4", 00:32:39.674 "traddr": "10.0.0.3", 00:32:39.674 "trsvcid": "4420", 00:32:39.674 "trtype": "TCP" 00:32:39.674 }, 00:32:39.674 "peer_address": { 00:32:39.674 "adrfam": "IPv4", 00:32:39.674 "traddr": "10.0.0.1", 00:32:39.674 "trsvcid": "48550", 00:32:39.674 "trtype": "TCP" 00:32:39.674 }, 00:32:39.674 "qid": 0, 00:32:39.674 "state": "enabled", 00:32:39.674 "thread": "nvmf_tgt_poll_group_000" 00:32:39.674 } 00:32:39.674 ]' 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:39.674 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:39.934 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:39.934 05:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:40.871 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:40.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:40.871 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:40.871 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.872 05:44:00 
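
Note: bidirectional authentication is optional per key. The connect_authenticate helper builds the controller-key argument with the bash idiom visible in the trace above,

    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

which expands to an empty array when no controller key is defined for that key id, so the flag is simply omitted. That is why the key3 iterations in this log call nvmf_subsystem_add_host with --dhchap-key key3 only, and the matching nvme connect passes no --dhchap-ctrl-secret: key3 authenticates the host without demanding proof from the controller.
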
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.872 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.439 00:32:41.439 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:41.439 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:41.439 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:41.698 { 00:32:41.698 "auth": { 00:32:41.698 "dhgroup": "ffdhe4096", 00:32:41.698 "digest": "sha384", 00:32:41.698 "state": "completed" 00:32:41.698 }, 00:32:41.698 "cntlid": 75, 00:32:41.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:41.698 "listen_address": { 00:32:41.698 "adrfam": "IPv4", 00:32:41.698 "traddr": "10.0.0.3", 00:32:41.698 "trsvcid": "4420", 00:32:41.698 "trtype": "TCP" 00:32:41.698 }, 00:32:41.698 "peer_address": { 00:32:41.698 "adrfam": "IPv4", 00:32:41.698 "traddr": "10.0.0.1", 00:32:41.698 "trsvcid": "48578", 00:32:41.698 "trtype": "TCP" 00:32:41.698 }, 00:32:41.698 "qid": 0, 00:32:41.698 "state": "enabled", 00:32:41.698 "thread": "nvmf_tgt_poll_group_000" 00:32:41.698 } 00:32:41.698 ]' 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:41.698 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:41.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:41.958 05:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:42.524 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:42.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:42.524 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:42.524 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.524 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.524 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.524 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:42.524 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:42.525 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:42.782 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:32:42.782 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:42.782 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:42.782 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.783 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.349 00:32:43.349 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:43.349 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:43.349 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:43.607 { 00:32:43.607 "auth": { 00:32:43.607 "dhgroup": "ffdhe4096", 00:32:43.607 "digest": "sha384", 00:32:43.607 "state": "completed" 00:32:43.607 }, 00:32:43.607 "cntlid": 77, 00:32:43.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:43.607 "listen_address": { 00:32:43.607 "adrfam": "IPv4", 00:32:43.607 "traddr": "10.0.0.3", 00:32:43.607 "trsvcid": "4420", 00:32:43.607 "trtype": "TCP" 00:32:43.607 }, 00:32:43.607 "peer_address": { 00:32:43.607 "adrfam": "IPv4", 00:32:43.607 "traddr": "10.0.0.1", 00:32:43.607 "trsvcid": "48598", 00:32:43.607 "trtype": "TCP" 00:32:43.607 }, 00:32:43.607 "qid": 0, 00:32:43.607 "state": "enabled", 00:32:43.607 "thread": "nvmf_tgt_poll_group_000" 00:32:43.607 } 00:32:43.607 ]' 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:43.607 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:43.608 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:43.608 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:43.866 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:43.866 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:44.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:44.802 05:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:44.802 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:45.369 00:32:45.369 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:45.369 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:45.369 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:45.627 { 00:32:45.627 "auth": { 00:32:45.627 "dhgroup": "ffdhe4096", 00:32:45.627 "digest": "sha384", 00:32:45.627 "state": "completed" 00:32:45.627 }, 00:32:45.627 "cntlid": 79, 00:32:45.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:45.627 "listen_address": { 00:32:45.627 "adrfam": "IPv4", 00:32:45.627 "traddr": "10.0.0.3", 00:32:45.627 "trsvcid": "4420", 00:32:45.627 "trtype": "TCP" 00:32:45.627 }, 00:32:45.627 "peer_address": { 00:32:45.627 "adrfam": "IPv4", 00:32:45.627 "traddr": "10.0.0.1", 00:32:45.627 "trsvcid": "48616", 00:32:45.627 "trtype": "TCP" 00:32:45.627 }, 00:32:45.627 "qid": 0, 00:32:45.627 "state": "enabled", 00:32:45.627 "thread": "nvmf_tgt_poll_group_000" 00:32:45.627 } 00:32:45.627 ]' 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:45.627 05:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:32:45.627 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:45.886 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:45.886 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:45.886 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:46.145 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:46.145 05:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:46.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:46.712 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.971 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.230 00:32:47.230 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:47.230 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:47.230 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:47.797 { 00:32:47.797 "auth": { 00:32:47.797 "dhgroup": "ffdhe6144", 00:32:47.797 "digest": "sha384", 00:32:47.797 "state": "completed" 00:32:47.797 }, 00:32:47.797 "cntlid": 81, 00:32:47.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:47.797 "listen_address": { 00:32:47.797 "adrfam": "IPv4", 00:32:47.797 "traddr": "10.0.0.3", 00:32:47.797 "trsvcid": "4420", 00:32:47.797 "trtype": "TCP" 00:32:47.797 }, 00:32:47.797 "peer_address": { 00:32:47.797 "adrfam": "IPv4", 00:32:47.797 "traddr": "10.0.0.1", 00:32:47.797 "trsvcid": "45542", 00:32:47.797 "trtype": "TCP" 00:32:47.797 }, 00:32:47.797 "qid": 0, 00:32:47.797 "state": "enabled", 00:32:47.797 "thread": "nvmf_tgt_poll_group_000" 00:32:47.797 } 00:32:47.797 ]' 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
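The xtrace above repeats one verification pattern for every digest/dhgroup/key combination: read the controller name back over the host RPC socket, then ask the target for the subsystem's qpairs and check the negotiated auth parameters. A minimal sketch of that step, assuming the same rpc.py path and host socket as the log, that the target listens on the default RPC socket, and the sha384/ffdhe6144 values of the surrounding iteration:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Host side: the attached bdev controller should be named nvme0.
  name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # Target side: the qpair must report the digest/dhgroup that were forced
  # via bdev_nvme_set_options, and an auth state of "completed".
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Tear down before the next combination.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
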
00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:47.797 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:47.798 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:48.056 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:48.056 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:48.623 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:48.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:48.624 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:48.624 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.624 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.624 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.624 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:48.624 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:48.624 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.883 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.146 00:32:49.413 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:49.413 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:49.413 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:49.675 { 00:32:49.675 "auth": { 00:32:49.675 "dhgroup": "ffdhe6144", 00:32:49.675 "digest": "sha384", 00:32:49.675 "state": "completed" 00:32:49.675 }, 00:32:49.675 "cntlid": 83, 00:32:49.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:49.675 "listen_address": { 00:32:49.675 "adrfam": "IPv4", 00:32:49.675 "traddr": "10.0.0.3", 00:32:49.675 "trsvcid": "4420", 00:32:49.675 "trtype": "TCP" 00:32:49.675 }, 00:32:49.675 "peer_address": { 00:32:49.675 "adrfam": "IPv4", 00:32:49.675 "traddr": "10.0.0.1", 00:32:49.675 "trsvcid": "45578", 00:32:49.675 "trtype": "TCP" 00:32:49.675 }, 00:32:49.675 "qid": 0, 00:32:49.675 "state": 
"enabled", 00:32:49.675 "thread": "nvmf_tgt_poll_group_000" 00:32:49.675 } 00:32:49.675 ]' 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:49.675 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:49.934 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:49.934 05:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:50.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.502 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.761 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.329 00:32:51.329 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:51.329 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:51.329 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:51.588 { 00:32:51.588 "auth": { 00:32:51.588 "dhgroup": "ffdhe6144", 00:32:51.588 "digest": "sha384", 00:32:51.588 "state": "completed" 00:32:51.588 }, 00:32:51.588 "cntlid": 85, 00:32:51.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:51.588 "listen_address": { 00:32:51.588 "adrfam": "IPv4", 00:32:51.588 "traddr": "10.0.0.3", 00:32:51.588 "trsvcid": "4420", 00:32:51.588 "trtype": "TCP" 00:32:51.588 }, 00:32:51.588 "peer_address": { 00:32:51.588 "adrfam": "IPv4", 00:32:51.588 "traddr": "10.0.0.1", 00:32:51.588 
"trsvcid": "45594", 00:32:51.588 "trtype": "TCP" 00:32:51.588 }, 00:32:51.588 "qid": 0, 00:32:51.588 "state": "enabled", 00:32:51.588 "thread": "nvmf_tgt_poll_group_000" 00:32:51.588 } 00:32:51.588 ]' 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:51.588 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:51.846 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:51.846 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:52.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:52.782 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:32:53.350 00:32:53.350 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:53.350 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:53.350 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:53.609 { 00:32:53.609 "auth": { 00:32:53.609 "dhgroup": "ffdhe6144", 00:32:53.609 "digest": "sha384", 00:32:53.609 "state": "completed" 00:32:53.609 }, 00:32:53.609 "cntlid": 87, 00:32:53.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:53.609 "listen_address": { 00:32:53.609 "adrfam": "IPv4", 00:32:53.609 "traddr": "10.0.0.3", 00:32:53.609 "trsvcid": "4420", 00:32:53.609 "trtype": "TCP" 00:32:53.609 }, 00:32:53.609 "peer_address": { 00:32:53.609 "adrfam": "IPv4", 00:32:53.609 "traddr": "10.0.0.1", 
00:32:53.609 "trsvcid": "45628", 00:32:53.609 "trtype": "TCP" 00:32:53.609 }, 00:32:53.609 "qid": 0, 00:32:53.609 "state": "enabled", 00:32:53.609 "thread": "nvmf_tgt_poll_group_000" 00:32:53.609 } 00:32:53.609 ]' 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:32:53.609 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:53.869 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:53.869 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:53.869 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:53.869 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:53.869 05:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:32:54.807 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:54.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:54.807 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:54.807 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.808 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:55.070 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.070 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.070 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.070 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.638 00:32:55.638 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:55.638 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:55.638 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:55.897 { 00:32:55.897 "auth": { 00:32:55.897 "dhgroup": "ffdhe8192", 00:32:55.897 "digest": "sha384", 00:32:55.897 "state": "completed" 00:32:55.897 }, 00:32:55.897 "cntlid": 89, 00:32:55.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:55.897 "listen_address": { 00:32:55.897 "adrfam": "IPv4", 00:32:55.897 "traddr": "10.0.0.3", 00:32:55.897 "trsvcid": "4420", 00:32:55.897 "trtype": "TCP" 
00:32:55.897 }, 00:32:55.897 "peer_address": { 00:32:55.897 "adrfam": "IPv4", 00:32:55.897 "traddr": "10.0.0.1", 00:32:55.897 "trsvcid": "45638", 00:32:55.897 "trtype": "TCP" 00:32:55.897 }, 00:32:55.897 "qid": 0, 00:32:55.897 "state": "enabled", 00:32:55.897 "thread": "nvmf_tgt_poll_group_000" 00:32:55.897 } 00:32:55.897 ]' 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:55.897 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:56.156 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:56.156 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:57.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.093 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.352 05:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.352 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.920 00:32:57.920 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:32:57.920 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:32:57.920 05:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:32:58.489 { 00:32:58.489 "auth": { 00:32:58.489 "dhgroup": "ffdhe8192", 00:32:58.489 "digest": "sha384", 00:32:58.489 "state": "completed" 00:32:58.489 }, 00:32:58.489 "cntlid": 91, 00:32:58.489 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:32:58.489 "listen_address": { 00:32:58.489 "adrfam": "IPv4", 00:32:58.489 "traddr": "10.0.0.3", 00:32:58.489 "trsvcid": "4420", 00:32:58.489 "trtype": "TCP" 00:32:58.489 }, 00:32:58.489 "peer_address": { 00:32:58.489 "adrfam": "IPv4", 00:32:58.489 "traddr": "10.0.0.1", 00:32:58.489 "trsvcid": "59778", 00:32:58.489 "trtype": "TCP" 00:32:58.489 }, 00:32:58.489 "qid": 0, 00:32:58.489 "state": "enabled", 00:32:58.489 "thread": "nvmf_tgt_poll_group_000" 00:32:58.489 } 00:32:58.489 ]' 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:32:58.489 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:32:58.748 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:58.748 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:32:59.690 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:32:59.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:32:59.691 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:32:59.691 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.691 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.691 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.691 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:32:59.691 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:59.691 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.953 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.889 00:33:00.889 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:00.889 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:00.889 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:01.165 { 00:33:01.165 "auth": { 00:33:01.165 "dhgroup": "ffdhe8192", 
00:33:01.165 "digest": "sha384", 00:33:01.165 "state": "completed" 00:33:01.165 }, 00:33:01.165 "cntlid": 93, 00:33:01.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:01.165 "listen_address": { 00:33:01.165 "adrfam": "IPv4", 00:33:01.165 "traddr": "10.0.0.3", 00:33:01.165 "trsvcid": "4420", 00:33:01.165 "trtype": "TCP" 00:33:01.165 }, 00:33:01.165 "peer_address": { 00:33:01.165 "adrfam": "IPv4", 00:33:01.165 "traddr": "10.0.0.1", 00:33:01.165 "trsvcid": "59792", 00:33:01.165 "trtype": "TCP" 00:33:01.165 }, 00:33:01.165 "qid": 0, 00:33:01.165 "state": "enabled", 00:33:01.165 "thread": "nvmf_tgt_poll_group_000" 00:33:01.165 } 00:33:01.165 ]' 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:01.165 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:01.165 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:01.165 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:01.165 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:01.460 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:01.460 05:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:02.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:02.396 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:02.655 05:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:03.222 00:33:03.222 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:03.222 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:03.222 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:03.480 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.480 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:03.480 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.480 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:03.480 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.480 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:03.480 { 00:33:03.480 "auth": { 00:33:03.480 "dhgroup": 
"ffdhe8192", 00:33:03.480 "digest": "sha384", 00:33:03.480 "state": "completed" 00:33:03.480 }, 00:33:03.480 "cntlid": 95, 00:33:03.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:03.480 "listen_address": { 00:33:03.480 "adrfam": "IPv4", 00:33:03.480 "traddr": "10.0.0.3", 00:33:03.480 "trsvcid": "4420", 00:33:03.480 "trtype": "TCP" 00:33:03.480 }, 00:33:03.480 "peer_address": { 00:33:03.480 "adrfam": "IPv4", 00:33:03.480 "traddr": "10.0.0.1", 00:33:03.480 "trsvcid": "59810", 00:33:03.480 "trtype": "TCP" 00:33:03.480 }, 00:33:03.480 "qid": 0, 00:33:03.480 "state": "enabled", 00:33:03.480 "thread": "nvmf_tgt_poll_group_000" 00:33:03.480 } 00:33:03.480 ]' 00:33:03.480 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:03.739 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:33:03.739 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:03.739 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:03.739 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:03.739 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:03.739 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:03.739 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:03.999 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:03.999 05:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:04.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:04.934 
05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:04.934 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.193 05:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.451 00:33:05.451 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:05.451 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:05.451 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:05.710 { 00:33:05.710 "auth": { 00:33:05.710 "dhgroup": "null", 00:33:05.710 "digest": "sha512", 00:33:05.710 "state": "completed" 00:33:05.710 }, 00:33:05.710 "cntlid": 97, 00:33:05.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:05.710 "listen_address": { 00:33:05.710 "adrfam": "IPv4", 00:33:05.710 "traddr": "10.0.0.3", 00:33:05.710 "trsvcid": "4420", 00:33:05.710 "trtype": "TCP" 00:33:05.710 }, 00:33:05.710 "peer_address": { 00:33:05.710 "adrfam": "IPv4", 00:33:05.710 "traddr": "10.0.0.1", 00:33:05.710 "trsvcid": "59836", 00:33:05.710 "trtype": "TCP" 00:33:05.710 }, 00:33:05.710 "qid": 0, 00:33:05.710 "state": "enabled", 00:33:05.710 "thread": "nvmf_tgt_poll_group_000" 00:33:05.710 } 00:33:05.710 ]' 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:05.710 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:06.277 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:06.277 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:06.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:06.844 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.411 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.670 00:33:07.670 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:07.670 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:07.670 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.928 05:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:07.928 { 00:33:07.928 "auth": { 00:33:07.928 "dhgroup": "null", 00:33:07.928 "digest": "sha512", 00:33:07.928 "state": "completed" 00:33:07.928 }, 00:33:07.928 "cntlid": 99, 00:33:07.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:07.928 "listen_address": { 00:33:07.928 "adrfam": "IPv4", 00:33:07.928 "traddr": "10.0.0.3", 00:33:07.928 "trsvcid": "4420", 00:33:07.928 "trtype": "TCP" 00:33:07.928 }, 00:33:07.928 "peer_address": { 00:33:07.928 "adrfam": "IPv4", 00:33:07.928 "traddr": "10.0.0.1", 00:33:07.928 "trsvcid": "58554", 00:33:07.928 "trtype": "TCP" 00:33:07.928 }, 00:33:07.928 "qid": 0, 00:33:07.928 "state": "enabled", 00:33:07.928 "thread": "nvmf_tgt_poll_group_000" 00:33:07.928 } 00:33:07.928 ]' 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:07.928 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:08.186 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:08.186 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:08.186 05:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:08.445 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:08.445 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:09.380 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:09.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:09.381 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:09.381 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.381 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.381 05:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.381 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:09.381 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:09.381 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.381 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.639 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:09.639 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.639 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.639 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.639 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.898 00:33:09.898 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:09.898 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:09.898 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:10.156 { 00:33:10.156 "auth": { 00:33:10.156 "dhgroup": "null", 00:33:10.156 "digest": "sha512", 00:33:10.156 "state": "completed" 00:33:10.156 }, 00:33:10.156 "cntlid": 101, 00:33:10.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:10.156 "listen_address": { 00:33:10.156 "adrfam": "IPv4", 00:33:10.156 "traddr": "10.0.0.3", 00:33:10.156 "trsvcid": "4420", 00:33:10.156 "trtype": "TCP" 00:33:10.156 }, 00:33:10.156 "peer_address": { 00:33:10.156 "adrfam": "IPv4", 00:33:10.156 "traddr": "10.0.0.1", 00:33:10.156 "trsvcid": "58566", 00:33:10.156 "trtype": "TCP" 00:33:10.156 }, 00:33:10.156 "qid": 0, 00:33:10.156 "state": "enabled", 00:33:10.156 "thread": "nvmf_tgt_poll_group_000" 00:33:10.156 } 00:33:10.156 ]' 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:10.156 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:10.156 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:10.156 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:10.156 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:10.156 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:10.156 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:10.723 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:10.723 05:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:11.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:11.290 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:11.549 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:11.808 00:33:12.066 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:12.066 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:12.066 05:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:12.325 { 00:33:12.325 "auth": { 00:33:12.325 "dhgroup": "null", 00:33:12.325 "digest": "sha512", 00:33:12.325 "state": "completed" 00:33:12.325 }, 00:33:12.325 "cntlid": 103, 00:33:12.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:12.325 "listen_address": { 00:33:12.325 "adrfam": "IPv4", 00:33:12.325 "traddr": "10.0.0.3", 00:33:12.325 "trsvcid": "4420", 00:33:12.325 "trtype": "TCP" 00:33:12.325 }, 00:33:12.325 "peer_address": { 00:33:12.325 "adrfam": "IPv4", 00:33:12.325 "traddr": "10.0.0.1", 00:33:12.325 "trsvcid": "58602", 00:33:12.325 "trtype": "TCP" 00:33:12.325 }, 00:33:12.325 "qid": 0, 00:33:12.325 "state": "enabled", 00:33:12.325 "thread": "nvmf_tgt_poll_group_000" 00:33:12.325 } 00:33:12.325 ]' 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:12.325 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:12.584 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:12.584 05:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:13.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.152 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.720 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.980 00:33:13.980 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:13.980 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:13.980 05:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:14.239 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.239 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:14.239 
05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.239 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:14.239 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.239 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:14.239 { 00:33:14.239 "auth": { 00:33:14.239 "dhgroup": "ffdhe2048", 00:33:14.239 "digest": "sha512", 00:33:14.239 "state": "completed" 00:33:14.239 }, 00:33:14.239 "cntlid": 105, 00:33:14.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:14.240 "listen_address": { 00:33:14.240 "adrfam": "IPv4", 00:33:14.240 "traddr": "10.0.0.3", 00:33:14.240 "trsvcid": "4420", 00:33:14.240 "trtype": "TCP" 00:33:14.240 }, 00:33:14.240 "peer_address": { 00:33:14.240 "adrfam": "IPv4", 00:33:14.240 "traddr": "10.0.0.1", 00:33:14.240 "trsvcid": "58620", 00:33:14.240 "trtype": "TCP" 00:33:14.240 }, 00:33:14.240 "qid": 0, 00:33:14.240 "state": "enabled", 00:33:14.240 "thread": "nvmf_tgt_poll_group_000" 00:33:14.240 } 00:33:14.240 ]' 00:33:14.240 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:14.240 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:14.240 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:14.240 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:14.240 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:14.499 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:14.499 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:14.499 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:14.758 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:14.758 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:15.327 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:15.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:15.327 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:15.327 05:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.327 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.327 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.327 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:15.327 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:15.327 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.585 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.586 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.586 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:16.154 00:33:16.154 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:16.155 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:16.155 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:16.420 { 00:33:16.420 "auth": { 00:33:16.420 "dhgroup": "ffdhe2048", 00:33:16.420 "digest": "sha512", 00:33:16.420 "state": "completed" 00:33:16.420 }, 00:33:16.420 "cntlid": 107, 00:33:16.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:16.420 "listen_address": { 00:33:16.420 "adrfam": "IPv4", 00:33:16.420 "traddr": "10.0.0.3", 00:33:16.420 "trsvcid": "4420", 00:33:16.420 "trtype": "TCP" 00:33:16.420 }, 00:33:16.420 "peer_address": { 00:33:16.420 "adrfam": "IPv4", 00:33:16.420 "traddr": "10.0.0.1", 00:33:16.420 "trsvcid": "58660", 00:33:16.420 "trtype": "TCP" 00:33:16.420 }, 00:33:16.420 "qid": 0, 00:33:16.420 "state": "enabled", 00:33:16.420 "thread": "nvmf_tgt_poll_group_000" 00:33:16.420 } 00:33:16.420 ]' 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:16.420 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:16.678 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:16.678 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:17.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.246 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.505 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.764 00:33:17.764 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:17.764 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:17.765 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:33:18.024 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.024 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:18.024 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.024 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:18.024 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.024 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:18.024 { 00:33:18.024 "auth": { 00:33:18.024 "dhgroup": "ffdhe2048", 00:33:18.024 "digest": "sha512", 00:33:18.024 "state": "completed" 00:33:18.024 }, 00:33:18.024 "cntlid": 109, 00:33:18.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:18.024 "listen_address": { 00:33:18.024 "adrfam": "IPv4", 00:33:18.024 "traddr": "10.0.0.3", 00:33:18.024 "trsvcid": "4420", 00:33:18.024 "trtype": "TCP" 00:33:18.024 }, 00:33:18.024 "peer_address": { 00:33:18.024 "adrfam": "IPv4", 00:33:18.025 "traddr": "10.0.0.1", 00:33:18.025 "trsvcid": "43034", 00:33:18.025 "trtype": "TCP" 00:33:18.025 }, 00:33:18.025 "qid": 0, 00:33:18.025 "state": "enabled", 00:33:18.025 "thread": "nvmf_tgt_poll_group_000" 00:33:18.025 } 00:33:18.025 ]' 00:33:18.025 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:18.025 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:18.025 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:18.025 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:18.025 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:18.284 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:18.284 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:18.284 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:18.284 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:18.284 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:19.223 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:19.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:19.223 05:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:19.223 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.223 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.223 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.223 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:19.223 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.223 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:19.223 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.224 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:19.224 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.224 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:19.224 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:19.224 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:19.482 00:33:19.482 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:19.482 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:19.482 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:33:20.050 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.050 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:20.050 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.050 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:20.050 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.050 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:20.050 { 00:33:20.050 "auth": { 00:33:20.050 "dhgroup": "ffdhe2048", 00:33:20.050 "digest": "sha512", 00:33:20.050 "state": "completed" 00:33:20.050 }, 00:33:20.050 "cntlid": 111, 00:33:20.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:20.051 "listen_address": { 00:33:20.051 "adrfam": "IPv4", 00:33:20.051 "traddr": "10.0.0.3", 00:33:20.051 "trsvcid": "4420", 00:33:20.051 "trtype": "TCP" 00:33:20.051 }, 00:33:20.051 "peer_address": { 00:33:20.051 "adrfam": "IPv4", 00:33:20.051 "traddr": "10.0.0.1", 00:33:20.051 "trsvcid": "43058", 00:33:20.051 "trtype": "TCP" 00:33:20.051 }, 00:33:20.051 "qid": 0, 00:33:20.051 "state": "enabled", 00:33:20.051 "thread": "nvmf_tgt_poll_group_000" 00:33:20.051 } 00:33:20.051 ]' 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:20.051 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:20.310 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:20.310 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:21.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:21.251 05:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.251 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:21.510 00:33:21.768 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:21.768 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:21.768 05:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:22.027 { 00:33:22.027 "auth": { 00:33:22.027 "dhgroup": "ffdhe3072", 00:33:22.027 "digest": "sha512", 00:33:22.027 "state": "completed" 00:33:22.027 }, 00:33:22.027 "cntlid": 113, 00:33:22.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:22.027 "listen_address": { 00:33:22.027 "adrfam": "IPv4", 00:33:22.027 "traddr": "10.0.0.3", 00:33:22.027 "trsvcid": "4420", 00:33:22.027 "trtype": "TCP" 00:33:22.027 }, 00:33:22.027 "peer_address": { 00:33:22.027 "adrfam": "IPv4", 00:33:22.027 "traddr": "10.0.0.1", 00:33:22.027 "trsvcid": "43074", 00:33:22.027 "trtype": "TCP" 00:33:22.027 }, 00:33:22.027 "qid": 0, 00:33:22.027 "state": "enabled", 00:33:22.027 "thread": "nvmf_tgt_poll_group_000" 00:33:22.027 } 00:33:22.027 ]' 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:22.027 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:22.304 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:22.304 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 
00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:22.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:22.936 05:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.506 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:23.765 00:33:23.765 05:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:23.765 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:23.765 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:24.025 { 00:33:24.025 "auth": { 00:33:24.025 "dhgroup": "ffdhe3072", 00:33:24.025 "digest": "sha512", 00:33:24.025 "state": "completed" 00:33:24.025 }, 00:33:24.025 "cntlid": 115, 00:33:24.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:24.025 "listen_address": { 00:33:24.025 "adrfam": "IPv4", 00:33:24.025 "traddr": "10.0.0.3", 00:33:24.025 "trsvcid": "4420", 00:33:24.025 "trtype": "TCP" 00:33:24.025 }, 00:33:24.025 "peer_address": { 00:33:24.025 "adrfam": "IPv4", 00:33:24.025 "traddr": "10.0.0.1", 00:33:24.025 "trsvcid": "43110", 00:33:24.025 "trtype": "TCP" 00:33:24.025 }, 00:33:24.025 "qid": 0, 00:33:24.025 "state": "enabled", 00:33:24.025 "thread": "nvmf_tgt_poll_group_000" 00:33:24.025 } 00:33:24.025 ]' 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:24.025 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:24.284 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:24.285 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:24.285 05:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:24.285 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:24.285 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret 
DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:25.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:25.221 05:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.480 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:25.738 00:33:25.738 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:25.738 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:25.738 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:25.997 { 00:33:25.997 "auth": { 00:33:25.997 "dhgroup": "ffdhe3072", 00:33:25.997 "digest": "sha512", 00:33:25.997 "state": "completed" 00:33:25.997 }, 00:33:25.997 "cntlid": 117, 00:33:25.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:25.997 "listen_address": { 00:33:25.997 "adrfam": "IPv4", 00:33:25.997 "traddr": "10.0.0.3", 00:33:25.997 "trsvcid": "4420", 00:33:25.997 "trtype": "TCP" 00:33:25.997 }, 00:33:25.997 "peer_address": { 00:33:25.997 "adrfam": "IPv4", 00:33:25.997 "traddr": "10.0.0.1", 00:33:25.997 "trsvcid": "43140", 00:33:25.997 "trtype": "TCP" 00:33:25.997 }, 00:33:25.997 "qid": 0, 00:33:25.997 "state": "enabled", 00:33:25.997 "thread": "nvmf_tgt_poll_group_000" 00:33:25.997 } 00:33:25.997 ]' 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:25.997 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:26.255 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:26.255 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:26.255 05:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:26.514 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:26.514 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:27.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:27.080 05:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:27.338 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:27.907 00:33:27.907 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:27.907 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:27.907 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:28.165 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:28.165 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:28.165 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.165 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.165 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.165 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:28.165 { 00:33:28.165 "auth": { 00:33:28.165 "dhgroup": "ffdhe3072", 00:33:28.165 "digest": "sha512", 00:33:28.165 "state": "completed" 00:33:28.165 }, 00:33:28.166 "cntlid": 119, 00:33:28.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:28.166 "listen_address": { 00:33:28.166 "adrfam": "IPv4", 00:33:28.166 "traddr": "10.0.0.3", 00:33:28.166 "trsvcid": "4420", 00:33:28.166 "trtype": "TCP" 00:33:28.166 }, 00:33:28.166 "peer_address": { 00:33:28.166 "adrfam": "IPv4", 00:33:28.166 "traddr": "10.0.0.1", 00:33:28.166 "trsvcid": "41180", 00:33:28.166 "trtype": "TCP" 00:33:28.166 }, 00:33:28.166 "qid": 0, 00:33:28.166 "state": "enabled", 00:33:28.166 "thread": "nvmf_tgt_poll_group_000" 00:33:28.166 } 00:33:28.166 ]' 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:28.166 05:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:28.425 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:28.425 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:28.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:28.992 05:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.560 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:29.819 00:33:29.819 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:29.819 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:29.819 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:30.078 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:30.078 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:30.078 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.078 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:30.078 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.078 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:30.078 { 00:33:30.078 "auth": { 00:33:30.078 "dhgroup": "ffdhe4096", 00:33:30.078 "digest": "sha512", 00:33:30.078 "state": "completed" 00:33:30.078 }, 00:33:30.078 "cntlid": 121, 00:33:30.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:30.078 "listen_address": { 00:33:30.078 "adrfam": "IPv4", 00:33:30.078 "traddr": "10.0.0.3", 00:33:30.078 "trsvcid": "4420", 00:33:30.078 "trtype": "TCP" 00:33:30.078 }, 00:33:30.078 "peer_address": { 00:33:30.078 "adrfam": "IPv4", 00:33:30.078 "traddr": "10.0.0.1", 00:33:30.078 "trsvcid": "41204", 00:33:30.078 "trtype": "TCP" 00:33:30.078 }, 00:33:30.078 "qid": 0, 00:33:30.078 "state": "enabled", 00:33:30.078 "thread": "nvmf_tgt_poll_group_000" 00:33:30.078 } 00:33:30.078 ]' 00:33:30.078 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:30.079 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:30.079 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:30.079 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:30.079 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:30.337 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:30.337 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:30.337 05:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:30.597 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret 
DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:30.597 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:31.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:31.165 05:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.423 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:31.989 00:33:31.989 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:31.989 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:31.989 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:32.247 05:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:32.247 { 00:33:32.247 "auth": { 00:33:32.247 "dhgroup": "ffdhe4096", 00:33:32.247 "digest": "sha512", 00:33:32.247 "state": "completed" 00:33:32.247 }, 00:33:32.247 "cntlid": 123, 00:33:32.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:32.247 "listen_address": { 00:33:32.247 "adrfam": "IPv4", 00:33:32.247 "traddr": "10.0.0.3", 00:33:32.247 "trsvcid": "4420", 00:33:32.247 "trtype": "TCP" 00:33:32.247 }, 00:33:32.247 "peer_address": { 00:33:32.247 "adrfam": "IPv4", 00:33:32.247 "traddr": "10.0.0.1", 00:33:32.247 "trsvcid": "41234", 00:33:32.247 "trtype": "TCP" 00:33:32.247 }, 00:33:32.247 "qid": 0, 00:33:32.247 "state": "enabled", 00:33:32.247 "thread": "nvmf_tgt_poll_group_000" 00:33:32.247 } 00:33:32.247 ]' 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:32.247 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:32.505 05:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:32.505 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:33.134 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:33.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:33.134 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:33.134 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.134 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.135 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.135 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:33.135 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:33.135 05:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.418 05:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.418 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:33.677 00:33:33.677 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:33.677 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:33.677 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:33.936 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.936 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:33.936 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.936 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:34.194 { 00:33:34.194 "auth": { 00:33:34.194 "dhgroup": "ffdhe4096", 00:33:34.194 "digest": "sha512", 00:33:34.194 "state": "completed" 00:33:34.194 }, 00:33:34.194 "cntlid": 125, 00:33:34.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:34.194 "listen_address": { 00:33:34.194 "adrfam": "IPv4", 00:33:34.194 "traddr": "10.0.0.3", 00:33:34.194 "trsvcid": "4420", 00:33:34.194 "trtype": "TCP" 00:33:34.194 }, 00:33:34.194 "peer_address": { 00:33:34.194 "adrfam": "IPv4", 00:33:34.194 "traddr": "10.0.0.1", 00:33:34.194 "trsvcid": "41264", 00:33:34.194 "trtype": "TCP" 00:33:34.194 }, 00:33:34.194 "qid": 0, 00:33:34.194 "state": "enabled", 00:33:34.194 "thread": "nvmf_tgt_poll_group_000" 00:33:34.194 } 00:33:34.194 ]' 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:34.194 05:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:34.451 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:34.451 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:35.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:35.016 05:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:35.274 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:35.841 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:35.841 { 00:33:35.841 "auth": { 00:33:35.841 "dhgroup": "ffdhe4096", 00:33:35.841 "digest": "sha512", 00:33:35.841 "state": "completed" 00:33:35.841 }, 00:33:35.841 "cntlid": 127, 00:33:35.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:35.841 "listen_address": { 00:33:35.841 "adrfam": "IPv4", 00:33:35.841 "traddr": "10.0.0.3", 00:33:35.841 "trsvcid": "4420", 00:33:35.841 "trtype": "TCP" 00:33:35.841 }, 00:33:35.841 "peer_address": { 00:33:35.841 "adrfam": "IPv4", 00:33:35.841 "traddr": "10.0.0.1", 00:33:35.841 "trsvcid": "41290", 00:33:35.841 "trtype": "TCP" 00:33:35.841 }, 00:33:35.841 "qid": 0, 00:33:35.841 "state": "enabled", 00:33:35.841 "thread": "nvmf_tgt_poll_group_000" 00:33:35.841 } 00:33:35.841 ]' 00:33:35.841 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:36.099 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:36.099 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:36.099 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:33:36.099 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:36.099 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:36.099 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:36.099 05:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:36.358 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:36.358 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:36.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:36.925 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.182 05:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.182 05:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:37.440 00:33:37.440 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:37.440 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:37.440 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:38.006 { 00:33:38.006 "auth": { 00:33:38.006 "dhgroup": "ffdhe6144", 00:33:38.006 "digest": "sha512", 00:33:38.006 "state": "completed" 00:33:38.006 }, 00:33:38.006 "cntlid": 129, 00:33:38.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:38.006 "listen_address": { 00:33:38.006 "adrfam": "IPv4", 00:33:38.006 "traddr": "10.0.0.3", 00:33:38.006 "trsvcid": "4420", 00:33:38.006 "trtype": "TCP" 00:33:38.006 }, 00:33:38.006 "peer_address": { 00:33:38.006 "adrfam": "IPv4", 00:33:38.006 "traddr": "10.0.0.1", 00:33:38.006 "trsvcid": "53878", 00:33:38.006 "trtype": "TCP" 00:33:38.006 }, 00:33:38.006 "qid": 0, 00:33:38.006 "state": "enabled", 00:33:38.006 "thread": "nvmf_tgt_poll_group_000" 00:33:38.006 } 00:33:38.006 ]' 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:38.006 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:38.264 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:38.264 05:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:38.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.830 05:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:38.830 05:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:39.396 00:33:39.396 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:39.396 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:39.396 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:39.654 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:39.654 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:39.654 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.654 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:39.654 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.654 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:39.654 { 00:33:39.654 "auth": { 00:33:39.654 "dhgroup": "ffdhe6144", 00:33:39.654 "digest": "sha512", 00:33:39.654 "state": "completed" 00:33:39.654 }, 00:33:39.654 "cntlid": 131, 00:33:39.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:39.654 "listen_address": { 00:33:39.654 "adrfam": "IPv4", 00:33:39.654 "traddr": "10.0.0.3", 00:33:39.654 "trsvcid": "4420", 00:33:39.654 "trtype": "TCP" 00:33:39.654 }, 00:33:39.654 "peer_address": { 00:33:39.654 "adrfam": "IPv4", 00:33:39.654 "traddr": "10.0.0.1", 00:33:39.654 "trsvcid": "53888", 00:33:39.654 "trtype": "TCP" 00:33:39.654 }, 00:33:39.654 "qid": 0, 00:33:39.654 "state": "enabled", 00:33:39.654 "thread": "nvmf_tgt_poll_group_000" 00:33:39.654 } 00:33:39.654 ]' 00:33:39.654 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:39.913 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:39.913 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:39.913 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:39.913 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
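The pass/fail decision in each iteration is three jq assertions over the target's qpair listing; a sketch, assuming $qpairs holds the JSON captured from nvmf_subsystem_get_qpairs as above:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The lone qpair must report an authentication that completed with
    # exactly the digest and DH group this iteration configured.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]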
-r '.[0].auth.state' 00:33:39.913 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:39.913 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:39.913 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:40.172 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:40.172 05:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:41.107 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:41.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.108 05:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.108 05:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:41.675 00:33:41.675 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:41.675 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:41.675 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:41.933 { 00:33:41.933 "auth": { 00:33:41.933 "dhgroup": "ffdhe6144", 00:33:41.933 "digest": "sha512", 00:33:41.933 "state": "completed" 00:33:41.933 }, 00:33:41.933 "cntlid": 133, 00:33:41.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:41.933 "listen_address": { 00:33:41.933 "adrfam": "IPv4", 00:33:41.933 "traddr": "10.0.0.3", 00:33:41.933 "trsvcid": "4420", 00:33:41.933 "trtype": "TCP" 00:33:41.933 }, 00:33:41.933 "peer_address": { 00:33:41.933 "adrfam": "IPv4", 00:33:41.933 "traddr": "10.0.0.1", 00:33:41.933 "trsvcid": "53920", 00:33:41.933 "trtype": "TCP" 00:33:41.933 }, 00:33:41.933 "qid": 0, 00:33:41.933 "state": "enabled", 00:33:41.933 "thread": "nvmf_tgt_poll_group_000" 00:33:41.933 } 00:33:41.933 ]' 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:41.933 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:42.191 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:33:42.191 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:42.191 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:42.191 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:42.191 05:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:42.450 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:42.450 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:43.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:43.018 05:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
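Each iteration also proves the kernel initiator path with nvme-cli; a sketch of that leg, with the DHHC-1 secrets abbreviated to placeholders (the full base64 values are in the trace):

    # Kernel connect with host and controller secrets, then clean disconnect.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 \
        --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 \
        --dhchap-secret 'DHHC-1:02:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0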
nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:43.585 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:43.844 00:33:43.844 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:43.844 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:43.844 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:44.104 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:44.104 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:44.104 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.104 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:44.104 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.104 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:44.104 { 00:33:44.104 "auth": { 00:33:44.104 "dhgroup": "ffdhe6144", 00:33:44.104 "digest": "sha512", 00:33:44.104 "state": "completed" 00:33:44.104 }, 00:33:44.104 "cntlid": 135, 00:33:44.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:44.104 "listen_address": { 00:33:44.104 "adrfam": "IPv4", 00:33:44.104 "traddr": "10.0.0.3", 00:33:44.104 "trsvcid": "4420", 00:33:44.104 "trtype": "TCP" 00:33:44.104 }, 00:33:44.104 "peer_address": { 00:33:44.104 "adrfam": "IPv4", 00:33:44.104 "traddr": "10.0.0.1", 00:33:44.104 "trsvcid": "53942", 00:33:44.104 "trtype": "TCP" 00:33:44.104 }, 00:33:44.104 "qid": 0, 00:33:44.104 "state": "enabled", 00:33:44.104 "thread": "nvmf_tgt_poll_group_000" 00:33:44.104 } 00:33:44.104 ]' 00:33:44.104 05:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:44.397 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:44.397 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:44.397 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:33:44.397 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:44.397 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:44.397 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:44.397 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:44.676 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:44.676 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:45.245 05:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:45.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:45.245 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
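Teardown between iterations mirrors the setup, so the next key starts from a clean slate; distilled from the detach/remove_host lines in the trace:

    # Drop the host-side controller, then revoke the host on the target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203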
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:45.504 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:46.072 00:33:46.072 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:46.072 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:46.072 05:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:46.331 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:46.331 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:46.331 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.331 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:46.331 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.331 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:46.331 { 00:33:46.331 "auth": { 00:33:46.331 "dhgroup": "ffdhe8192", 00:33:46.331 "digest": "sha512", 00:33:46.331 "state": "completed" 00:33:46.331 }, 00:33:46.331 "cntlid": 137, 00:33:46.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:46.331 "listen_address": { 00:33:46.331 "adrfam": "IPv4", 00:33:46.331 "traddr": "10.0.0.3", 00:33:46.331 "trsvcid": "4420", 00:33:46.331 "trtype": "TCP" 00:33:46.331 }, 00:33:46.331 "peer_address": { 00:33:46.331 "adrfam": "IPv4", 00:33:46.331 "traddr": "10.0.0.1", 00:33:46.331 "trsvcid": "53984", 00:33:46.331 "trtype": "TCP" 00:33:46.331 }, 00:33:46.331 "qid": 0, 00:33:46.331 "state": "enabled", 00:33:46.331 "thread": "nvmf_tgt_poll_group_000" 00:33:46.331 } 00:33:46.331 ]' 00:33:46.590 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:46.590 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:46.590 05:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:46.590 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:46.590 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:46.590 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:46.590 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:46.590 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:46.848 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:46.848 05:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:47.784 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:47.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:47.784 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:47.784 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.784 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.784 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.785 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:47.785 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:47.785 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:33:48.044 05:45:07 
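The ffdhe8192 passes that follow are the same cycle with a different group; judging from the target/auth.sh@119-@123 lines in the trace, the driver is, roughly, a nested loop over DH groups and key indices:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Re-arm the initiator, then run one full connect/verify cycle.
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done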
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.044 05:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:48.611 00:33:48.611 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:48.611 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:48.611 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:49.178 { 00:33:49.178 "auth": { 00:33:49.178 "dhgroup": "ffdhe8192", 00:33:49.178 "digest": "sha512", 00:33:49.178 "state": "completed" 00:33:49.178 }, 00:33:49.178 "cntlid": 139, 00:33:49.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:49.178 "listen_address": { 00:33:49.178 "adrfam": "IPv4", 00:33:49.178 "traddr": "10.0.0.3", 00:33:49.178 "trsvcid": "4420", 00:33:49.178 "trtype": "TCP" 00:33:49.178 }, 00:33:49.178 "peer_address": { 00:33:49.178 "adrfam": "IPv4", 00:33:49.178 "traddr": "10.0.0.1", 00:33:49.178 "trsvcid": "50332", 00:33:49.178 "trtype": "TCP" 00:33:49.178 }, 00:33:49.178 "qid": 0, 00:33:49.178 "state": "enabled", 00:33:49.178 "thread": "nvmf_tgt_poll_group_000" 00:33:49.178 } 00:33:49.178 ]' 00:33:49.178 05:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:49.178 05:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:49.442 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:49.442 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: --dhchap-ctrl-secret DHHC-1:02:ZWNiMGYyYmE1ZjkyOTcyNTNlNTY4MzZmZGQ5NDJmN2Y1NmY1NmJjMTIzYmU2MGQ2ulIQNw==: 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:50.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:50.010 05:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:50.577 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:51.143 00:33:51.143 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:51.143 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:51.143 05:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:51.401 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:51.401 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:51.401 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:51.401 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.401 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:51.401 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:51.401 { 00:33:51.401 "auth": { 00:33:51.401 "dhgroup": "ffdhe8192", 00:33:51.401 "digest": "sha512", 00:33:51.402 "state": "completed" 00:33:51.402 }, 00:33:51.402 "cntlid": 141, 00:33:51.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:51.402 "listen_address": { 00:33:51.402 "adrfam": "IPv4", 00:33:51.402 "traddr": "10.0.0.3", 00:33:51.402 "trsvcid": "4420", 00:33:51.402 "trtype": "TCP" 00:33:51.402 }, 00:33:51.402 "peer_address": { 00:33:51.402 "adrfam": "IPv4", 00:33:51.402 "traddr": "10.0.0.1", 00:33:51.402 "trsvcid": "50366", 00:33:51.402 "trtype": "TCP" 00:33:51.402 }, 00:33:51.402 "qid": 0, 00:33:51.402 "state": 
"enabled", 00:33:51.402 "thread": "nvmf_tgt_poll_group_000" 00:33:51.402 } 00:33:51.402 ]' 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:51.402 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:51.660 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:51.660 05:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:01:ZmYyNzVjYzJlNzZkNzVhMzNjZDRhNWE2MGI2YWY2OWSHQ7js: 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:52.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:52.227 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:52.486 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:33:53.052 00:33:53.309 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:53.309 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:53.309 05:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:53.567 { 00:33:53.567 "auth": { 00:33:53.567 "dhgroup": "ffdhe8192", 00:33:53.567 "digest": "sha512", 00:33:53.567 "state": "completed" 00:33:53.567 }, 00:33:53.567 "cntlid": 143, 00:33:53.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:53.567 "listen_address": { 00:33:53.567 "adrfam": "IPv4", 00:33:53.567 "traddr": "10.0.0.3", 00:33:53.567 "trsvcid": "4420", 00:33:53.567 "trtype": "TCP" 00:33:53.567 }, 00:33:53.567 "peer_address": { 00:33:53.567 "adrfam": "IPv4", 00:33:53.567 "traddr": "10.0.0.1", 00:33:53.567 "trsvcid": "50398", 00:33:53.567 "trtype": "TCP" 00:33:53.567 }, 00:33:53.567 "qid": 0, 00:33:53.567 
"state": "enabled", 00:33:53.567 "thread": "nvmf_tgt_poll_group_000" 00:33:53.567 } 00:33:53.567 ]' 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:53.567 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:53.568 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:53.568 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:53.568 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:53.568 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:53.568 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:53.568 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:53.826 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:53.826 05:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:33:54.393 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:54.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:54.393 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:54.393 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.393 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.394 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.394 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:33:54.394 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:33:54.394 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:33:54.394 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:54.652 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:54.652 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:54.911 05:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:55.479 00:33:55.479 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:33:55.479 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:33:55.479 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:33:55.737 { 00:33:55.737 "auth": { 00:33:55.737 "dhgroup": "ffdhe8192", 00:33:55.737 "digest": "sha512", 00:33:55.737 "state": "completed" 00:33:55.737 }, 00:33:55.737 
"cntlid": 145, 00:33:55.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:55.737 "listen_address": { 00:33:55.737 "adrfam": "IPv4", 00:33:55.737 "traddr": "10.0.0.3", 00:33:55.737 "trsvcid": "4420", 00:33:55.737 "trtype": "TCP" 00:33:55.737 }, 00:33:55.737 "peer_address": { 00:33:55.737 "adrfam": "IPv4", 00:33:55.737 "traddr": "10.0.0.1", 00:33:55.737 "trsvcid": "50420", 00:33:55.737 "trtype": "TCP" 00:33:55.737 }, 00:33:55.737 "qid": 0, 00:33:55.737 "state": "enabled", 00:33:55.737 "thread": "nvmf_tgt_poll_group_000" 00:33:55.737 } 00:33:55.737 ]' 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:33:55.737 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:33:55.996 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:33:55.996 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:33:55.996 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:33:55.996 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:33:55.996 05:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:33:56.254 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:56.255 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:00:YTQyODdlNTliMGRmYmIyOTJlNjAwMDE0ZmUwMGUxMTIwZTQxYzhjODE4NTVhMmE28EaDIA==: --dhchap-ctrl-secret DHHC-1:03:N2IyOTg2MDk0MWQ2ZGNiOTY0YWVjZGI1NzM5N2I4YmI4MzRiYTI3ZTNjNGU3ZmMyMWJlODIxNzVhNmY2Yzk5Ybvh7SE=: 00:33:56.821 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:33:56.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:33:56.821 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:56.821 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.821 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.821 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.821 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 00:33:56.821 05:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.821 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:33:57.079 05:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:33:57.657 2024/11/20 05:45:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:33:57.657 request: 00:33:57.657 { 00:33:57.657 "method": "bdev_nvme_attach_controller", 00:33:57.657 "params": { 00:33:57.657 "name": "nvme0", 00:33:57.657 "trtype": "tcp", 00:33:57.657 "traddr": "10.0.0.3", 00:33:57.657 "adrfam": "ipv4", 00:33:57.657 "trsvcid": "4420", 00:33:57.657 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:33:57.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:57.657 "prchk_reftag": false, 00:33:57.657 "prchk_guard": false, 00:33:57.657 "hdgst": false, 00:33:57.657 "ddgst": false, 00:33:57.657 "dhchap_key": "key2", 00:33:57.657 "allow_unrecognized_csi": false 00:33:57.657 } 00:33:57.657 } 00:33:57.657 Got JSON-RPC error response 00:33:57.657 GoRPCClient: error on JSON-RPC call 00:33:57.657 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:33:57.657 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 
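The exchange above is the first negative test in this pass: the host entry on the target was re-registered with key1 only, so presenting key2 from the initiator fails DH-HMAC-CHAP negotiation and bdev_nvme_attach_controller returns Code=-5 (Input/output error), exactly as logged. A condensed sketch of the two RPCs involved, with every flag taken from the trace itself (the <hostnqn> placeholder stands for the full nqn.2014-08.org.nvmexpress:uuid:9b54f445-... value; target-side calls go to the default /var/tmp/spdk.sock, host-side ones to /var/tmp/host.sock):
# Target side: allow the host, but only with key1.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key1
# Host side: attempt to attach while presenting key2; the authentication
# handshake fails and the JSON-RPC call errors out, as the log shows.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key2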
00:33:57.657 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:57.658 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:58.225 2024/11/20 05:45:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:33:58.225 request: 00:33:58.225 { 00:33:58.225 "method": "bdev_nvme_attach_controller", 00:33:58.225 "params": { 00:33:58.225 "name": "nvme0", 00:33:58.225 "trtype": "tcp", 00:33:58.225 "traddr": "10.0.0.3", 00:33:58.225 "adrfam": "ipv4", 00:33:58.225 "trsvcid": "4420", 00:33:58.225 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:33:58.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:58.225 "prchk_reftag": false, 00:33:58.225 "prchk_guard": false, 00:33:58.225 "hdgst": false, 00:33:58.225 "ddgst": false, 00:33:58.225 "dhchap_key": "key1", 00:33:58.225 "dhchap_ctrlr_key": "ckey2", 00:33:58.225 "allow_unrecognized_csi": false 00:33:58.225 } 00:33:58.225 } 00:33:58.225 Got JSON-RPC error response 00:33:58.225 GoRPCClient: error on JSON-RPC call 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # 
type -t bdev_connect 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.225 05:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:58.793 2024/11/20 05:45:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:33:58.793 request: 00:33:58.793 { 00:33:58.793 "method": "bdev_nvme_attach_controller", 00:33:58.793 "params": { 00:33:58.793 "name": "nvme0", 00:33:58.793 "trtype": "tcp", 00:33:58.793 "traddr": "10.0.0.3", 00:33:58.793 "adrfam": "ipv4", 00:33:58.793 "trsvcid": "4420", 00:33:58.793 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:33:58.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:33:58.793 "prchk_reftag": false, 00:33:58.793 "prchk_guard": false, 00:33:58.793 "hdgst": false, 00:33:58.793 "ddgst": false, 00:33:58.793 "dhchap_key": "key1", 00:33:58.793 "dhchap_ctrlr_key": "ckey1", 00:33:58.793 "allow_unrecognized_csi": false 00:33:58.793 } 00:33:58.793 } 00:33:58.793 Got JSON-RPC error response 00:33:58.793 GoRPCClient: error on JSON-RPC call 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76816 00:33:58.793 05:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 76816 ']' 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 76816 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76816 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:58.793 killing process with pid 76816 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76816' 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 76816 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 76816 00:33:58.793 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81698 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81698 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 81698 ']' 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
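Here the target is restarted for the second half of the test with DH-HMAC-CHAP debug logging enabled. Reading the command line from the trace: -i 0 picks the shared-memory instance ID, -e 0xFFFF sets the tracepoint group mask, -L nvmf_auth turns on the nvmf_auth debug log flag, and --wait-for-rpc holds the app before subsystem initialization until an explicit RPC releases it. A hedged sketch of the start-up handshake this implies (framework_start_init is the standard SPDK RPC for that release; whether the harness issues it directly or through a helper is not visible in this excerpt):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# Once the UNIX socket answers, finish initialization so subsystem RPCs can proceed.
rpc.py framework_start_init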
00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:59.052 05:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81698 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 81698 ']' 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:59.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
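Both waitforlisten calls above block until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. The real helper lives in autotest_common.sh; a rough approximation of its behavior, under the assumption that it simply polls the RPC socket while the process stays alive:
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    while kill -0 "$pid" 2>/dev/null; do
        # rpc_get_methods is a cheap, always-available SPDK RPC.
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1   # process died before the socket came up
}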
00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:59.310 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 null0 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3Cd 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Tkh ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Tkh 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Uz3 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.w4v ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w4v 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:33:59.569 05:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UsH 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.uFb ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uFb 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8UX 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
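The @174-@176 loop that just ran provisions the restarted target's keyring: each DHHC-1 secret generated earlier is loaded from its temp file under a well-known name (key0..key3 for the host keys, ckey0..ckey2 for the paired controller keys), and every later RPC refers to keys by those names instead of carrying secrets inline. The pattern, using file names from the trace:
# Load a host key and, when present, its paired controller key.
rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.3Cd
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Tkh
# ...repeated for key1/ckey1, key2/ckey2, and key3...
# Afterwards a host entry can name them directly:
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3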
00:33:59.569 05:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:00.504 nvme0n1 00:34:00.504 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:34:00.504 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:00.504 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:34:00.763 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:00.763 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:34:00.763 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.763 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:34:01.023 { 00:34:01.023 "auth": { 00:34:01.023 "dhgroup": "ffdhe8192", 00:34:01.023 "digest": "sha512", 00:34:01.023 "state": "completed" 00:34:01.023 }, 00:34:01.023 "cntlid": 1, 00:34:01.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:34:01.023 "listen_address": { 00:34:01.023 "adrfam": "IPv4", 00:34:01.023 "traddr": "10.0.0.3", 00:34:01.023 "trsvcid": "4420", 00:34:01.023 "trtype": "TCP" 00:34:01.023 }, 00:34:01.023 "peer_address": { 00:34:01.023 "adrfam": "IPv4", 00:34:01.023 "traddr": "10.0.0.1", 00:34:01.023 "trsvcid": "35068", 00:34:01.023 "trtype": "TCP" 00:34:01.023 }, 00:34:01.023 "qid": 0, 00:34:01.023 "state": "enabled", 00:34:01.023 "thread": "nvmf_tgt_poll_group_000" 00:34:01.023 } 00:34:01.023 ]' 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:01.023 05:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:01.281 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:34:01.281 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:34:01.848 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:02.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key3 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:34:02.106 05:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:02.365 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:02.624 2024/11/20 05:45:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:02.624 request: 00:34:02.624 { 00:34:02.624 "method": "bdev_nvme_attach_controller", 00:34:02.624 "params": { 00:34:02.624 "name": "nvme0", 00:34:02.624 "trtype": "tcp", 00:34:02.624 "traddr": "10.0.0.3", 00:34:02.624 "adrfam": "ipv4", 00:34:02.624 "trsvcid": "4420", 00:34:02.624 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:02.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:34:02.624 "prchk_reftag": false, 00:34:02.624 "prchk_guard": false, 00:34:02.624 "hdgst": false, 00:34:02.624 "ddgst": false, 00:34:02.624 "dhchap_key": "key3", 00:34:02.624 "allow_unrecognized_csi": false 00:34:02.624 } 00:34:02.624 } 00:34:02.624 Got JSON-RPC error response 00:34:02.624 GoRPCClient: error on JSON-RPC call 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:34:02.624 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@642 -- # type -t bdev_connect 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:34:02.882 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:02.883 05:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:34:03.450 2024/11/20 05:45:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:03.450 request: 00:34:03.450 { 00:34:03.450 "method": "bdev_nvme_attach_controller", 00:34:03.450 "params": { 00:34:03.450 "name": "nvme0", 00:34:03.450 "trtype": "tcp", 00:34:03.450 "traddr": "10.0.0.3", 00:34:03.450 "adrfam": "ipv4", 00:34:03.450 "trsvcid": "4420", 00:34:03.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:03.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:34:03.450 "prchk_reftag": false, 00:34:03.450 "prchk_guard": false, 00:34:03.450 "hdgst": false, 00:34:03.450 "ddgst": false, 00:34:03.450 "dhchap_key": "key3", 00:34:03.450 "allow_unrecognized_csi": false 00:34:03.450 } 00:34:03.450 } 00:34:03.450 Got JSON-RPC error response 00:34:03.450 GoRPCClient: error on JSON-RPC call 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:03.450 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:03.708 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:04.276 2024/11/20 05:45:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:04.276 request: 00:34:04.276 { 00:34:04.276 "method": "bdev_nvme_attach_controller", 00:34:04.276 "params": { 00:34:04.276 "name": "nvme0", 00:34:04.276 "trtype": "tcp", 00:34:04.276 "traddr": "10.0.0.3", 00:34:04.276 "adrfam": "ipv4", 00:34:04.276 "trsvcid": "4420", 00:34:04.276 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:04.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:34:04.276 "prchk_reftag": false, 00:34:04.276 "prchk_guard": false, 00:34:04.276 "hdgst": false, 00:34:04.276 "ddgst": false, 00:34:04.276 "dhchap_key": "key0", 00:34:04.276 "dhchap_ctrlr_key": "key1", 00:34:04.276 "allow_unrecognized_csi": false 00:34:04.276 } 00:34:04.276 } 00:34:04.276 Got JSON-RPC error response 00:34:04.276 GoRPCClient: error on JSON-RPC call 00:34:04.276 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:34:04.276 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:04.276 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:04.276 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:04.276 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:34:04.276 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:34:04.276 05:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:34:04.535 nvme0n1 00:34:04.535 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:34:04.535 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:34:04.535 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:04.809 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.809 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:04.809 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:05.128 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 00:34:05.128 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.128 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:34:05.128 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.128 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:34:05.128 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:34:05.128 05:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:34:06.064 nvme0n1 00:34:06.064 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:34:06.064 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:06.064 05:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:34:06.322 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:06.581 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.581 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:34:06.581 05:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid 9b54f445-f067-44f7-9ac3-ecdb7acf7203 -l 0 --dhchap-secret DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: --dhchap-ctrl-secret DHHC-1:03:N2RiMzY2ODQ1ZWIxMDA0ZjEzMjkxZjY0MzhkMjk4OTIwYmU0ZmZiMTZjOGFhYzE2ZWM2NTI1N2EwNGQ4YjMwMLpLaX8=: 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
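The @218-@225 sequence demonstrates online key rotation: nvmf_subsystem_set_keys replaces the DH-HMAC-CHAP keys on the existing host entry without removing it, and the in-band nvme connect that follows already presents the new DHHC-1:02/DHHC-1:03 pair. The rotation step reduced to its RPC (names as logged; <hostnqn> again stands for the full uuid-based NQN):
# Rotate the host's keys in place; the existing registration is preserved.
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 <hostnqn> \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# A subsequent attach with the retired key1 must now be rejected, which the
# NOT bdev_connect check just below verifies.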
00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:07.146 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:34:07.405 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:34:07.972 2024/11/20 05:45:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:07.972 request: 00:34:07.972 { 00:34:07.972 "method": "bdev_nvme_attach_controller", 00:34:07.972 "params": { 00:34:07.972 "name": "nvme0", 00:34:07.972 "trtype": "tcp", 00:34:07.972 "traddr": "10.0.0.3", 00:34:07.972 "adrfam": "ipv4", 
00:34:07.972 "trsvcid": "4420", 00:34:07.972 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:34:07.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203", 00:34:07.972 "prchk_reftag": false, 00:34:07.972 "prchk_guard": false, 00:34:07.972 "hdgst": false, 00:34:07.972 "ddgst": false, 00:34:07.972 "dhchap_key": "key1", 00:34:07.972 "allow_unrecognized_csi": false 00:34:07.972 } 00:34:07.972 } 00:34:07.972 Got JSON-RPC error response 00:34:07.972 GoRPCClient: error on JSON-RPC call 00:34:07.972 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:34:07.972 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.973 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.973 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.973 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:07.973 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:07.973 05:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:08.909 nvme0n1 00:34:09.167 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:34:09.167 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:09.167 05:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:34:09.425 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.425 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:09.425 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:09.682 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:09.682 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.682 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:09.682 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.682 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:34:09.682 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:34:09.682 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:34:10.248 nvme0n1 00:34:10.248 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:34:10.248 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:34:10.248 05:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:10.248 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.248 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:34:10.248 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key key3 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: '' 2s 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: ]] 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTExZWI2NWM3NWJmMWFlNTcyZDkyZjI4OTNkNDRkNWU3sX2x: 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:34:10.814 05:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key1 --dhchap-ctrlr-key key2 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: 2s 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: ]] 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:M2NjZjc3ZjU5YTY4NDYwODI2MzllZmJlZDQ5ODI5OGIzMWUwN2JmYjY3ZWIxNjRlY6TzaA==: 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:34:12.715 05:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:34:14.617 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:34:14.617 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:34:14.617 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:34:14.617 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:34:14.875 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:34:14.875 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:34:14.875 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:34:14.875 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:34:14.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:34:14.876 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:14.876 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.876 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:14.876 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.876 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:14.876 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:14.876 05:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:15.811 nvme0n1 00:34:15.811 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:15.811 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.811 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.811 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.811 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:15.811 05:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:34:16.744 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:34:17.309 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:34:17.309 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:34:17.309 05:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:34:17.568 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:34:18.134 2024/11/20 05:45:37 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:34:18.134 request: 00:34:18.134 { 00:34:18.134 "method": "bdev_nvme_set_keys", 00:34:18.134 "params": { 00:34:18.134 "name": "nvme0", 00:34:18.134 "dhchap_key": "key1", 00:34:18.134 "dhchap_ctrlr_key": "key3" 00:34:18.134 } 00:34:18.134 } 00:34:18.134 Got JSON-RPC error response 00:34:18.134 GoRPCClient: error on JSON-RPC call 00:34:18.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:34:18.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:18.134 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:18.135 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:18.135 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:34:18.135 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:18.135 05:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:34:18.392 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:34:18.393 05:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:34:19.326 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:34:19.326 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:34:19.326 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key0 --dhchap-ctrlr-key key1 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:19.584 05:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:20.516 nvme0n1 00:34:20.516 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --dhchap-key key2 --dhchap-ctrlr-key key3 00:34:20.516 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.516 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:34:20.774 05:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:34:21.340 2024/11/20 05:45:41 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:34:21.340 request: 00:34:21.340 { 00:34:21.340 "method": "bdev_nvme_set_keys", 00:34:21.340 "params": { 00:34:21.340 "name": "nvme0", 00:34:21.340 "dhchap_key": "key2", 00:34:21.340 "dhchap_ctrlr_key": "key0" 00:34:21.340 } 00:34:21.340 } 00:34:21.340 Got JSON-RPC error response 00:34:21.340 GoRPCClient: error on JSON-RPC call 00:34:21.340 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:34:21.340 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:21.340 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:21.340 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:21.340 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:34:21.340 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 
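After the rejected re-key (Code=-13 Permission denied), the controller was attached with --ctrlr-loss-timeout-sec 1, so the script expects it to drop and polls bdev_nvme_get_controllers until the list drains. A minimal sketch of the wait loop traced here, using the same RPC and jq calls as the log (the function name is illustrative):

    # Poll the host RPC socket until no bdev_nvme controllers remain.
    wait_for_detach() {
        while (( $(scripts/rpc.py -s /var/tmp/host.sock \
                       bdev_nvme_get_controllers | jq length) != 0 )); do
            sleep 1s
        done
    }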
00:34:21.340 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:21.597 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:34:21.597 05:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:34:22.528 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:34:22.528 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:34:22.528 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76860 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 76860 ']' 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 76860 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:23.091 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76860 00:34:23.092 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:23.092 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:23.092 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76860' 00:34:23.092 killing process with pid 76860 00:34:23.092 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 76860 00:34:23.092 05:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 76860 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.349 rmmod nvme_tcp 00:34:23.349 rmmod nvme_fabrics 00:34:23.349 rmmod nvme_keyring 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81698 ']' 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81698 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 81698 ']' 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 81698 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:23.349 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81698 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:23.608 killing process with pid 81698 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81698' 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 81698 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 81698 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:23.608 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:23.865 05:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.3Cd /tmp/spdk.key-sha256.Uz3 /tmp/spdk.key-sha384.UsH /tmp/spdk.key-sha512.8UX /tmp/spdk.key-sha512.Tkh /tmp/spdk.key-sha384.w4v /tmp/spdk.key-sha256.uFb '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:34:23.865 00:34:23.865 real 3m5.748s 00:34:23.865 user 7m18.865s 00:34:23.865 sys 0m33.451s 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:34:23.865 ************************************ 00:34:23.865 END TEST nvmf_auth_target 00:34:23.865 ************************************ 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:23.865 05:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:23.866 05:45:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:34:23.866 ************************************ 00:34:23.866 START TEST nvmf_bdevio_no_huge 00:34:23.866 ************************************ 00:34:23.866 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:34:24.125 * Looking for test storage... 
00:34:24.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:34:24.125 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:24.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.126 --rc genhtml_branch_coverage=1 00:34:24.126 --rc genhtml_function_coverage=1 00:34:24.126 --rc genhtml_legend=1 00:34:24.126 --rc geninfo_all_blocks=1 00:34:24.126 --rc geninfo_unexecuted_blocks=1 00:34:24.126 00:34:24.126 ' 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:24.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.126 --rc genhtml_branch_coverage=1 00:34:24.126 --rc genhtml_function_coverage=1 00:34:24.126 --rc genhtml_legend=1 00:34:24.126 --rc geninfo_all_blocks=1 00:34:24.126 --rc geninfo_unexecuted_blocks=1 00:34:24.126 00:34:24.126 ' 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:24.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.126 --rc genhtml_branch_coverage=1 00:34:24.126 --rc genhtml_function_coverage=1 00:34:24.126 --rc genhtml_legend=1 00:34:24.126 --rc geninfo_all_blocks=1 00:34:24.126 --rc geninfo_unexecuted_blocks=1 00:34:24.126 00:34:24.126 ' 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:24.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.126 --rc genhtml_branch_coverage=1 00:34:24.126 --rc genhtml_function_coverage=1 00:34:24.126 --rc genhtml_legend=1 00:34:24.126 --rc geninfo_all_blocks=1 00:34:24.126 --rc geninfo_unexecuted_blocks=1 00:34:24.126 00:34:24.126 ' 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:24.126 
05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.126 05:45:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:34:24.126 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:24.127 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:24.127 
05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:24.127 Cannot find device "nvmf_init_br" 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:34:24.127 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:24.389 Cannot find device "nvmf_init_br2" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:24.389 Cannot find device "nvmf_tgt_br" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:24.389 Cannot find device "nvmf_tgt_br2" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:24.389 Cannot find device "nvmf_init_br" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:24.389 Cannot find device "nvmf_init_br2" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:24.389 Cannot find device "nvmf_tgt_br" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:24.389 Cannot find device "nvmf_tgt_br2" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:24.389 Cannot find device "nvmf_br" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:24.389 Cannot find device "nvmf_init_if" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:24.389 Cannot find device "nvmf_init_if2" 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:34:24.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:24.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:24.389 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:24.645 05:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:24.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:24.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.123 ms 00:34:24.645 00:34:24.645 --- 10.0.0.3 ping statistics --- 00:34:24.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.645 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:24.645 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:24.645 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:34:24.645 00:34:24.645 --- 10.0.0.4 ping statistics --- 00:34:24.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.645 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:24.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:24.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:34:24.645 00:34:24.645 --- 10.0.0.1 ping statistics --- 00:34:24.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.645 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:24.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
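# NOTE: the bring-up traced above follows a fixed pattern from nvmf/common.sh:
# create a target namespace, wire veth pairs into it, bridge the host-side
# peers, and punch TCP/4420 through the firewall. A minimal standalone sketch
# of one initiator/target pair (device names and addresses taken from the
# trace above, second pair omitted):
#
#   ip netns add nvmf_tgt_ns_spdk
#   ip link add nvmf_init_if type veth peer name nvmf_init_br   # host side
#   ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
#   ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
#   ip addr add 10.0.0.1/24 dev nvmf_init_if
#   ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
#   ip link add nvmf_br type bridge && ip link set nvmf_br up
#   ip link set nvmf_init_br master nvmf_br
#   ip link set nvmf_tgt_br master nvmf_br
#   for d in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$d" up; done
#   ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
#   iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
#
# The four pings (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the
# namespace) verify both directions across the bridge; the reply below is the
# last of them.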
00:34:24.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:34:24.645 00:34:24.645 --- 10.0.0.2 ping statistics --- 00:34:24.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.645 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:24.645 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82553 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82553 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 82553 ']' 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:24.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:24.646 05:45:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:24.646 [2024-11-20 05:45:44.560047] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
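# NOTE: nvmfappstart has launched the target inside the namespace; per the
# nvmf/common.sh@508 line above, the effective command is
#
#   ip netns exec nvmf_tgt_ns_spdk \
#       /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
#       -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
#
# -m 0x78 selects cores 3-6 (the four reactors reported further below), and
# --no-huge -s 1024 runs DPDK from 1024 MiB of ordinary anonymous memory
# instead of hugepages, which is the whole point of the no_huge variant of
# this test. The EAL parameter line below shows how those flags are handed
# to DPDK.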
00:34:24.646 [2024-11-20 05:45:44.560160] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:34:24.904 [2024-11-20 05:45:44.723603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:24.904 [2024-11-20 05:45:44.797885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:24.904 [2024-11-20 05:45:44.797958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:24.904 [2024-11-20 05:45:44.797969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:24.904 [2024-11-20 05:45:44.797979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:24.904 [2024-11-20 05:45:44.797986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:24.904 [2024-11-20 05:45:44.798496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:24.904 [2024-11-20 05:45:44.798596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:24.904 [2024-11-20 05:45:44.799736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:24.904 [2024-11-20 05:45:44.799761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:25.838 [2024-11-20 05:45:45.565498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:25.838 Malloc0 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.838 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:25.839 [2024-11-20 05:45:45.609857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:25.839 { 00:34:25.839 "params": { 00:34:25.839 "name": "Nvme$subsystem", 00:34:25.839 "trtype": "$TEST_TRANSPORT", 00:34:25.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.839 "adrfam": "ipv4", 00:34:25.839 "trsvcid": "$NVMF_PORT", 00:34:25.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.839 "hdgst": ${hdgst:-false}, 00:34:25.839 "ddgst": ${ddgst:-false} 00:34:25.839 }, 00:34:25.839 "method": "bdev_nvme_attach_controller" 00:34:25.839 } 00:34:25.839 EOF 00:34:25.839 )") 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
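# NOTE: gen_nvmf_target_json expands the parameterized heredoc above once per
# subsystem and normalizes the result with jq; bdevio reads the rendered JSON
# (printed below) from /dev/fd/62. A reduced sketch of the same pattern,
# trimmed to two params for illustration:
#
#   subsystem=1
#   config="$(cat <<EOF
#   {"params": {"name": "Nvme$subsystem", "trsvcid": "$NVMF_PORT"},
#    "method": "bdev_nvme_attach_controller"}
#   EOF
#   )"
#   printf '%s\n' "$config" | jq .   # validate and pretty-print before handoff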
00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:34:25.839 05:45:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:25.839 "params": { 00:34:25.839 "name": "Nvme1", 00:34:25.839 "trtype": "tcp", 00:34:25.839 "traddr": "10.0.0.3", 00:34:25.839 "adrfam": "ipv4", 00:34:25.839 "trsvcid": "4420", 00:34:25.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:25.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:25.839 "hdgst": false, 00:34:25.839 "ddgst": false 00:34:25.839 }, 00:34:25.839 "method": "bdev_nvme_attach_controller" 00:34:25.839 }' 00:34:25.839 [2024-11-20 05:45:45.675423] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:34:25.839 [2024-11-20 05:45:45.675535] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82607 ] 00:34:26.097 [2024-11-20 05:45:45.845928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:26.097 [2024-11-20 05:45:45.939707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.097 [2024-11-20 05:45:45.940061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:26.097 [2024-11-20 05:45:45.940066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.354 I/O targets: 00:34:26.354 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:26.354 00:34:26.354 00:34:26.354 CUnit - A unit testing framework for C - Version 2.1-3 00:34:26.354 http://cunit.sourceforge.net/ 00:34:26.354 00:34:26.354 00:34:26.354 Suite: bdevio tests on: Nvme1n1 00:34:26.354 Test: blockdev write read block ...passed 00:34:26.612 Test: blockdev write zeroes read block ...passed 00:34:26.612 Test: blockdev write zeroes read no split ...passed 00:34:26.612 Test: blockdev write zeroes read split ...passed 00:34:26.612 Test: blockdev write zeroes read split partial ...passed 00:34:26.612 Test: blockdev reset ...[2024-11-20 05:45:46.333813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:26.612 [2024-11-20 05:45:46.333912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1803380 (9): Bad file descriptor 00:34:26.612 [2024-11-20 05:45:46.349775] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
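# NOTE: the reset case above drops the TCP connection on purpose. The
# "Bad file descriptor" flush error on tqpair 0x1803380 is the expected side
# effect of nvme_ctrlr_disconnect, and the "Resetting controller successful"
# notice is the reconnect that makes the case pass; the "passed" verdict below
# belongs to it. Outside bdevio, a comparable reset could be driven by hand
# with (hypothetical usage, not part of this run):
#
#   /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme1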
00:34:26.612 passed 00:34:26.612 Test: blockdev write read 8 blocks ...passed 00:34:26.612 Test: blockdev write read size > 128k ...passed 00:34:26.612 Test: blockdev write read invalid size ...passed 00:34:26.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:26.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:26.613 Test: blockdev write read max offset ...passed 00:34:26.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:26.613 Test: blockdev writev readv 8 blocks ...passed 00:34:26.613 Test: blockdev writev readv 30 x 1block ...passed 00:34:26.613 Test: blockdev writev readv block ...passed 00:34:26.613 Test: blockdev writev readv size > 128k ...passed 00:34:26.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:26.613 Test: blockdev comparev and writev ...[2024-11-20 05:45:46.521853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.521890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:26.613 [2024-11-20 05:45:46.521908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.521918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:26.613 [2024-11-20 05:45:46.522343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.522368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:26.613 [2024-11-20 05:45:46.522384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.522395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:26.613 [2024-11-20 05:45:46.522663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.522676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:26.613 [2024-11-20 05:45:46.522691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.522701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:26.613 [2024-11-20 05:45:46.522933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.522945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:26.613 [2024-11-20 05:45:46.522960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:26.613 [2024-11-20 05:45:46.522969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:26.871 passed 00:34:26.871 Test: blockdev nvme passthru rw ...passed 00:34:26.871 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:45:46.605780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:26.871 [2024-11-20 05:45:46.605809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:26.871 [2024-11-20 05:45:46.605995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:26.871 [2024-11-20 05:45:46.606010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:26.871 [2024-11-20 05:45:46.606308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:26.871 [2024-11-20 05:45:46.606331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:26.871 [2024-11-20 05:45:46.606441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:26.871 [2024-11-20 05:45:46.606466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:26.871 passed 00:34:26.871 Test: blockdev nvme admin passthru ...passed 00:34:26.871 Test: blockdev copy ...passed 00:34:26.871 00:34:26.871 Run Summary: Type Total Ran Passed Failed Inactive 00:34:26.871 suites 1 1 n/a 0 0 00:34:26.871 tests 23 23 23 0 0 00:34:26.871 asserts 152 152 152 0 n/a 00:34:26.871 00:34:26.871 Elapsed time = 0.934 seconds 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.437 rmmod nvme_tcp 00:34:27.437 rmmod nvme_fabrics 00:34:27.437 rmmod nvme_keyring 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82553 ']' 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82553 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 82553 ']' 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 82553 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82553 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82553' 00:34:27.437 killing process with pid 82553 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 82553 00:34:27.437 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 82553 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.070 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:28.071 05:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:34:28.071 00:34:28.071 real 0m4.157s 00:34:28.071 user 0m13.710s 00:34:28.071 sys 0m1.793s 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:34:28.071 ************************************ 00:34:28.071 END TEST nvmf_bdevio_no_huge 00:34:28.071 ************************************ 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:28.071 05:45:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:34:28.343 ************************************ 00:34:28.343 START TEST nvmf_tls 00:34:28.343 ************************************ 00:34:28.343 05:45:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:34:28.343 * Looking for test storage... 
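# NOTE: run_test wraps every suite in the START TEST / END TEST banners and
# the real/user/sys timing line seen above; the next suite was invoked as
#
#   run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh \
#       --transport=tcp
#
# tls.sh sources nvmf/common.sh again, so the lcov probing and the full
# veth/bridge bring-up repeat below before any TLS-specific work starts; the
# "Found test storage" line below answers the lookup above.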
00:34:28.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.343 --rc genhtml_branch_coverage=1 00:34:28.343 --rc genhtml_function_coverage=1 00:34:28.343 --rc genhtml_legend=1 00:34:28.343 --rc geninfo_all_blocks=1 00:34:28.343 --rc geninfo_unexecuted_blocks=1 00:34:28.343 00:34:28.343 ' 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.343 --rc genhtml_branch_coverage=1 00:34:28.343 --rc genhtml_function_coverage=1 00:34:28.343 --rc genhtml_legend=1 00:34:28.343 --rc geninfo_all_blocks=1 00:34:28.343 --rc geninfo_unexecuted_blocks=1 00:34:28.343 00:34:28.343 ' 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.343 --rc genhtml_branch_coverage=1 00:34:28.343 --rc genhtml_function_coverage=1 00:34:28.343 --rc genhtml_legend=1 00:34:28.343 --rc geninfo_all_blocks=1 00:34:28.343 --rc geninfo_unexecuted_blocks=1 00:34:28.343 00:34:28.343 ' 00:34:28.343 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:28.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.343 --rc genhtml_branch_coverage=1 00:34:28.343 --rc genhtml_function_coverage=1 00:34:28.343 --rc genhtml_legend=1 00:34:28.344 --rc geninfo_all_blocks=1 00:34:28.344 --rc geninfo_unexecuted_blocks=1 00:34:28.344 00:34:28.344 ' 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.344 05:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:28.344 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:28.344 
05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:28.344 Cannot find device "nvmf_init_br" 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:28.344 Cannot find device "nvmf_init_br2" 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:28.344 Cannot find device "nvmf_tgt_br" 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:34:28.344 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:28.344 Cannot find device "nvmf_tgt_br2" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:28.602 Cannot find device "nvmf_init_br" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:28.602 Cannot find device "nvmf_init_br2" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:28.602 Cannot find device "nvmf_tgt_br" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:28.602 Cannot find device "nvmf_tgt_br2" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:28.602 Cannot find device "nvmf_br" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:28.602 Cannot find device "nvmf_init_if" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:28.602 Cannot find device "nvmf_init_if2" 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:28.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:28.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:28.602 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:28.603 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:28.861 05:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:28.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:28.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:34:28.861 00:34:28.861 --- 10.0.0.3 ping statistics --- 00:34:28.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.861 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:28.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:28.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:34:28.861 00:34:28.861 --- 10.0.0.4 ping statistics --- 00:34:28.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.861 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:28.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:28.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:34:28.861 00:34:28.861 --- 10.0.0.1 ping statistics --- 00:34:28.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.861 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:28.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
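# NOTE: this second bring-up is nvmftestinit running again for the TLS suite;
# the topology and addresses are identical to the bdevio run. Each firewall
# rule is added through the ipts wrapper, which tags the rule with an
# SPDK_NVMF comment so that teardown can restore the original ruleset
# mechanically, as the iptr helper did earlier:
#
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
#
# The reply below completes the last of the four reachability checks.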
00:34:28.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:34:28.861 00:34:28.861 --- 10.0.0.2 ping statistics --- 00:34:28.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:28.861 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82850 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82850 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 82850 ']' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:28.861 05:45:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:28.861 [2024-11-20 05:45:48.769632] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
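# NOTE: unlike the bdevio run, the target here starts with --wait-for-rpc, so
# the app pauses before subsystem initialization and the test can reconfigure
# the socket layer before any listener exists. The TLS version checks that
# follow boil down to this RPC sequence (all three calls appear in the trace
# below):
#
#   rpc.py sock_set_default_impl -i ssl
#   rpc.py sock_impl_set_options -i ssl --tls-version 13
#   rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
#
# The EAL parameter line below shows the single-core (-c 0x2) launch.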
00:34:28.862 [2024-11-20 05:45:48.769976] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:29.120 [2024-11-20 05:45:48.933414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.120 [2024-11-20 05:45:48.992265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:29.120 [2024-11-20 05:45:48.992561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:29.120 [2024-11-20 05:45:48.992586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:29.120 [2024-11-20 05:45:48.992599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:29.120 [2024-11-20 05:45:48.992610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:29.120 [2024-11-20 05:45:48.992987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:34:30.054 05:45:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:34:30.312 true 00:34:30.312 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:34:30.312 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:30.569 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:34:30.569 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:34:30.569 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:34:30.827 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:30.827 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:34:31.085 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:34:31.085 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:34:31.085 05:45:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:34:31.343 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:34:31.343 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:34:31.601 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:34:31.601 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:34:31.601 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:31.601 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:34:31.860 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:34:31.860 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:34:31.860 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:34:32.119 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:34:32.119 05:45:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:32.377 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:34:32.377 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:34:32.377 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:34:32.636 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:34:32.636 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Ey90tFgkn9 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.8WyP6yRnPi 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Ey90tFgkn9 00:34:32.894 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.8WyP6yRnPi 00:34:33.153 05:45:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:34:33.153 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:34:33.412 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Ey90tFgkn9 00:34:33.412 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ey90tFgkn9 00:34:33.412 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:33.978 [2024-11-20 05:45:53.594599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.978 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:34:34.235 05:45:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:34:34.494 [2024-11-20 05:45:54.174709] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:34.494 [2024-11-20 05:45:54.174913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:34.494 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:34:34.757 malloc0 00:34:34.757 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:35.015 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ey90tFgkn9 00:34:35.272 05:45:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:34:35.530 05:45:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Ey90tFgkn9 00:34:45.504 Initializing NVMe Controllers 00:34:45.504 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:34:45.504 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:45.504 Initialization complete. Launching workers. 00:34:45.504 ======================================================== 00:34:45.504 Latency(us) 00:34:45.504 Device Information : IOPS MiB/s Average min max 00:34:45.504 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14344.80 56.03 4461.99 1717.81 6540.00 00:34:45.504 ======================================================== 00:34:45.504 Total : 14344.80 56.03 4461.99 1717.81 6540.00 00:34:45.504 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ey90tFgkn9 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ey90tFgkn9 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83215 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83215 /var/tmp/bdevperf.sock 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83215 ']' 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:45.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:45.504 05:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:45.762 [2024-11-20 05:46:05.472723] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
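Note on the key material: the two files created above (/tmp/tmp.Ey90tFgkn9 holding key0, /tmp/tmp.8WyP6yRnPi holding key_2) store PSKs in the NVMe TLS interchange format NVMeTLSkey-1:<hh>:<base64>:, where the base64 payload is the configured secret followed by a 4-byte CRC-32 trailer. xtrace does not echo the inline script that format_key pipes through python -; a minimal sketch that reproduces the shape of the keys shown above (the little-endian CRC byte order is an assumption the trace does not confirm):

format_key() {
    local prefix=$1 key=$2 digest=$3
    python - <<PY
import base64, zlib
crc = zlib.crc32(b"$key").to_bytes(4, "little")  # 4-byte CRC-32 trailer, byte order assumed
b64 = base64.b64encode(b"$key" + crc).decode()
print("$prefix:{:02x}:{}:".format($digest, b64), end="")
PY
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1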
00:34:45.762 [2024-11-20 05:46:05.472825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83215 ] 00:34:45.762 [2024-11-20 05:46:05.632399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.021 [2024-11-20 05:46:05.695358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:46.586 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:46.586 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:34:46.586 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ey90tFgkn9 00:34:46.845 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:34:47.103 [2024-11-20 05:46:06.877437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:47.103 TLSTESTn1 00:34:47.103 05:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:34:47.361 Running I/O for 10 seconds... 00:34:49.240 5123.00 IOPS, 20.01 MiB/s [2024-11-20T05:46:10.536Z] 5113.00 IOPS, 19.97 MiB/s [2024-11-20T05:46:11.471Z] 5156.00 IOPS, 20.14 MiB/s [2024-11-20T05:46:12.408Z] 5149.00 IOPS, 20.11 MiB/s [2024-11-20T05:46:13.346Z] 5162.40 IOPS, 20.17 MiB/s [2024-11-20T05:46:14.284Z] 5164.83 IOPS, 20.18 MiB/s [2024-11-20T05:46:15.221Z] 5154.86 IOPS, 20.14 MiB/s [2024-11-20T05:46:16.158Z] 5158.38 IOPS, 20.15 MiB/s [2024-11-20T05:46:17.536Z] 5166.11 IOPS, 20.18 MiB/s [2024-11-20T05:46:17.536Z] 5153.40 IOPS, 20.13 MiB/s 00:34:57.617 Latency(us) 00:34:57.617 [2024-11-20T05:46:17.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.617 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:57.617 Verification LBA range: start 0x0 length 0x2000 00:34:57.617 TLSTESTn1 : 10.01 5159.62 20.15 0.00 0.00 24770.03 3885.35 27462.70 00:34:57.617 [2024-11-20T05:46:17.536Z] =================================================================================================================== 00:34:57.617 [2024-11-20T05:46:17.536Z] Total : 5159.62 20.15 0.00 0.00 24770.03 3885.35 27462.70 00:34:57.617 { 00:34:57.617 "results": [ 00:34:57.617 { 00:34:57.617 "job": "TLSTESTn1", 00:34:57.617 "core_mask": "0x4", 00:34:57.617 "workload": "verify", 00:34:57.617 "status": "finished", 00:34:57.617 "verify_range": { 00:34:57.617 "start": 0, 00:34:57.617 "length": 8192 00:34:57.617 }, 00:34:57.617 "queue_depth": 128, 00:34:57.617 "io_size": 4096, 00:34:57.617 "runtime": 10.012162, 00:34:57.617 "iops": 5159.6248642401115, 00:34:57.617 "mibps": 20.154784625937936, 00:34:57.617 "io_failed": 0, 00:34:57.617 "io_timeout": 0, 00:34:57.617 "avg_latency_us": 24770.02773358996, 00:34:57.617 "min_latency_us": 3885.3485714285716, 00:34:57.617 "max_latency_us": 27462.704761904763 00:34:57.617 } 00:34:57.617 ], 00:34:57.617 "core_count": 1 00:34:57.617 } 00:34:57.617 05:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83215 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83215 ']' 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83215 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83215 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83215' 00:34:57.617 killing process with pid 83215 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83215 00:34:57.617 Received shutdown signal, test time was about 10.000000 seconds 00:34:57.617 00:34:57.617 Latency(us) 00:34:57.617 [2024-11-20T05:46:17.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:57.617 [2024-11-20T05:46:17.536Z] =================================================================================================================== 00:34:57.617 [2024-11-20T05:46:17.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83215 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8WyP6yRnPi 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8WyP6yRnPi 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8WyP6yRnPi 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8WyP6yRnPi 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83379 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83379 /var/tmp/bdevperf.sock 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83379 ']' 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:57.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:57.617 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:57.617 [2024-11-20 05:46:17.400134] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:34:57.617 [2024-11-20 05:46:17.400211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83379 ] 00:34:57.876 [2024-11-20 05:46:17.537945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.876 [2024-11-20 05:46:17.590159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.876 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:57.876 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:34:57.876 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8WyP6yRnPi 00:34:58.135 05:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:34:58.394 [2024-11-20 05:46:18.193883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:58.394 [2024-11-20 05:46:18.202704] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:58.394 [2024-11-20 05:46:18.203564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x816ac0 (107): Transport endpoint is not connected 00:34:58.394 [2024-11-20 05:46:18.204542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x816ac0 (9): Bad file descriptor 00:34:58.394 [2024-11-20 
05:46:18.205540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:34:58.394 [2024-11-20 05:46:18.205563] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:34:58.394 [2024-11-20 05:46:18.205573] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:34:58.394 [2024-11-20 05:46:18.205589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:34:58.394 2024/11/20 05:46:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:58.394 request: 00:34:58.394 { 00:34:58.394 "method": "bdev_nvme_attach_controller", 00:34:58.394 "params": { 00:34:58.394 "name": "TLSTEST", 00:34:58.394 "trtype": "tcp", 00:34:58.394 "traddr": "10.0.0.3", 00:34:58.394 "adrfam": "ipv4", 00:34:58.394 "trsvcid": "4420", 00:34:58.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.394 "prchk_reftag": false, 00:34:58.394 "prchk_guard": false, 00:34:58.394 "hdgst": false, 00:34:58.394 "ddgst": false, 00:34:58.394 "psk": "key0", 00:34:58.394 "allow_unrecognized_csi": false 00:34:58.394 } 00:34:58.394 } 00:34:58.394 Got JSON-RPC error response 00:34:58.394 GoRPCClient: error on JSON-RPC call 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83379 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83379 ']' 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83379 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83379 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:34:58.394 killing process with pid 83379 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83379' 00:34:58.394 Received shutdown signal, test time was about 10.000000 seconds 00:34:58.394 00:34:58.394 Latency(us) 00:34:58.394 [2024-11-20T05:46:18.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.394 [2024-11-20T05:46:18.313Z] =================================================================================================================== 00:34:58.394 [2024-11-20T05:46:18.313Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83379 00:34:58.394 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@976 -- # wait 83379 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ey90tFgkn9 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ey90tFgkn9 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ey90tFgkn9 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ey90tFgkn9 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83419 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83419 /var/tmp/bdevperf.sock 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83419 ']' 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:58.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
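Note: the NOT wrapper around these run_bdevperf invocations inverts the exit status, so the test asserts that the attach must fail; the es= bookkeeping in the trace records that status. Reduced to its essence (the real helper in autotest_common.sh also validates its argument, per the valid_exec_arg lines above):

NOT() {
    ! "$@"
}
NOT false && echo 'assertion holds: wrapped command failed as expected'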
00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:58.653 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:58.653 [2024-11-20 05:46:18.488918] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:34:58.653 [2024-11-20 05:46:18.488994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83419 ] 00:34:58.913 [2024-11-20 05:46:18.631184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.913 [2024-11-20 05:46:18.678908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.913 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:58.913 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:34:58.913 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ey90tFgkn9 00:34:59.193 05:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:34:59.451 [2024-11-20 05:46:19.166329] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:59.451 [2024-11-20 05:46:19.172692] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:34:59.451 [2024-11-20 05:46:19.172737] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:34:59.451 [2024-11-20 05:46:19.172802] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:59.451 [2024-11-20 05:46:19.172920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d3ac0 (107): Transport endpoint is not connected 00:34:59.451 [2024-11-20 05:46:19.173910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d3ac0 (9): Bad file descriptor 00:34:59.451 [2024-11-20 05:46:19.174906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:34:59.451 [2024-11-20 05:46:19.174929] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:34:59.451 [2024-11-20 05:46:19.174939] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:34:59.451 [2024-11-20 05:46:19.174954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
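Note: the failure above is an identity-lookup miss, not a bad key. For NVMe/TCP the TLS PSK identity takes the form NVMe0R01 <hostnqn> <subnqn>, and the target only has key0 registered for the host1/cnode1 pair, so a connection offering host2 is rejected before any key comparison:

# registered on the target (nvmf_subsystem_add_host ... --psk key0):
#   NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1
# offered by the attach attempt above:
#   NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

The earlier failure (pid 83379) is the complementary case: the identity matched but the key in /tmp/tmp.8WyP6yRnPi did not match key0, so the handshake itself collapsed and the initiator saw errno 107 instead of a PSK-lookup error.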
00:34:59.451 2024/11/20 05:46:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:34:59.451 request: 00:34:59.451 { 00:34:59.451 "method": "bdev_nvme_attach_controller", 00:34:59.451 "params": { 00:34:59.451 "name": "TLSTEST", 00:34:59.451 "trtype": "tcp", 00:34:59.451 "traddr": "10.0.0.3", 00:34:59.451 "adrfam": "ipv4", 00:34:59.451 "trsvcid": "4420", 00:34:59.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:59.451 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:59.451 "prchk_reftag": false, 00:34:59.451 "prchk_guard": false, 00:34:59.451 "hdgst": false, 00:34:59.451 "ddgst": false, 00:34:59.451 "psk": "key0", 00:34:59.451 "allow_unrecognized_csi": false 00:34:59.452 } 00:34:59.452 } 00:34:59.452 Got JSON-RPC error response 00:34:59.452 GoRPCClient: error on JSON-RPC call 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83419 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83419 ']' 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83419 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83419 00:34:59.452 killing process with pid 83419 00:34:59.452 Received shutdown signal, test time was about 10.000000 seconds 00:34:59.452 00:34:59.452 Latency(us) 00:34:59.452 [2024-11-20T05:46:19.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.452 [2024-11-20T05:46:19.371Z] =================================================================================================================== 00:34:59.452 [2024-11-20T05:46:19.371Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83419' 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83419 00:34:59.452 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83419 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:59.710 05:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ey90tFgkn9 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ey90tFgkn9 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ey90tFgkn9 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ey90tFgkn9 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83458 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83458 /var/tmp/bdevperf.sock 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83458 ']' 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:59.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:59.710 05:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:34:59.710 [2024-11-20 05:46:19.484753] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:34:59.710 [2024-11-20 05:46:19.484860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83458 ] 00:34:59.967 [2024-11-20 05:46:19.637981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.967 [2024-11-20 05:46:19.685923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:00.535 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:00.535 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:00.535 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ey90tFgkn9 00:35:00.794 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:35:01.054 [2024-11-20 05:46:20.877335] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:01.054 [2024-11-20 05:46:20.882767] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:35:01.054 [2024-11-20 05:46:20.882803] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:35:01.054 [2024-11-20 05:46:20.882851] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:01.054 [2024-11-20 05:46:20.883202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ddac0 (107): Transport endpoint is not connected 00:35:01.054 [2024-11-20 05:46:20.884193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ddac0 (9): Bad file descriptor 00:35:01.054 [2024-11-20 05:46:20.885190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:35:01.054 [2024-11-20 05:46:20.885212] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:35:01.054 [2024-11-20 05:46:20.885222] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:35:01.054 [2024-11-20 05:46:20.885239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:35:01.054 2024/11/20 05:46:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:35:01.054 request: 00:35:01.054 { 00:35:01.054 "method": "bdev_nvme_attach_controller", 00:35:01.054 "params": { 00:35:01.054 "name": "TLSTEST", 00:35:01.054 "trtype": "tcp", 00:35:01.054 "traddr": "10.0.0.3", 00:35:01.054 "adrfam": "ipv4", 00:35:01.054 "trsvcid": "4420", 00:35:01.054 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:01.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:01.054 "prchk_reftag": false, 00:35:01.054 "prchk_guard": false, 00:35:01.054 "hdgst": false, 00:35:01.054 "ddgst": false, 00:35:01.054 "psk": "key0", 00:35:01.054 "allow_unrecognized_csi": false 00:35:01.054 } 00:35:01.054 } 00:35:01.054 Got JSON-RPC error response 00:35:01.054 GoRPCClient: error on JSON-RPC call 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83458 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83458 ']' 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83458 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83458 00:35:01.054 killing process with pid 83458 00:35:01.054 Received shutdown signal, test time was about 10.000000 seconds 00:35:01.054 00:35:01.054 Latency(us) 00:35:01.054 [2024-11-20T05:46:20.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.054 [2024-11-20T05:46:20.973Z] =================================================================================================================== 00:35:01.054 [2024-11-20T05:46:20.973Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83458' 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83458 00:35:01.054 05:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83458 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:01.313 05:46:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83511 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83511 /var/tmp/bdevperf.sock 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83511 ']' 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:01.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:01.313 05:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:01.573 [2024-11-20 05:46:21.277227] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
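Note: this negative case never reaches the wire. keyring_file_add_key requires an absolute path, so registering an empty string fails on the bdevperf side, and the subsequent bdev_nvme_attach_controller then fails with "Could not load PSK: key0", as the errors below show:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''   # rejected: non-absolute path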
00:35:01.573 [2024-11-20 05:46:21.277600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83511 ] 00:35:01.573 [2024-11-20 05:46:21.425315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.832 [2024-11-20 05:46:21.505295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.400 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:02.400 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:02.400 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:35:02.658 [2024-11-20 05:46:22.507006] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:35:02.658 [2024-11-20 05:46:22.507068] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:02.658 2024/11/20 05:46:22 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:35:02.658 request: 00:35:02.658 { 00:35:02.658 "method": "keyring_file_add_key", 00:35:02.658 "params": { 00:35:02.658 "name": "key0", 00:35:02.658 "path": "" 00:35:02.658 } 00:35:02.658 } 00:35:02.658 Got JSON-RPC error response 00:35:02.658 GoRPCClient: error on JSON-RPC call 00:35:02.658 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:35:02.917 [2024-11-20 05:46:22.715193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:02.917 [2024-11-20 05:46:22.715269] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:35:02.917 2024/11/20 05:46:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:35:02.917 request: 00:35:02.917 { 00:35:02.917 "method": "bdev_nvme_attach_controller", 00:35:02.917 "params": { 00:35:02.917 "name": "TLSTEST", 00:35:02.917 "trtype": "tcp", 00:35:02.917 "traddr": "10.0.0.3", 00:35:02.917 "adrfam": "ipv4", 00:35:02.917 "trsvcid": "4420", 00:35:02.917 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.917 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:02.917 "prchk_reftag": false, 00:35:02.917 "prchk_guard": false, 00:35:02.917 "hdgst": false, 00:35:02.917 "ddgst": false, 00:35:02.917 "psk": "key0", 00:35:02.917 "allow_unrecognized_csi": false 00:35:02.917 } 00:35:02.917 } 00:35:02.917 Got JSON-RPC error response 00:35:02.917 GoRPCClient: error on JSON-RPC call 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83511 00:35:02.917 05:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83511 ']' 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83511 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83511 00:35:02.917 killing process with pid 83511 00:35:02.917 Received shutdown signal, test time was about 10.000000 seconds 00:35:02.917 00:35:02.917 Latency(us) 00:35:02.917 [2024-11-20T05:46:22.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.917 [2024-11-20T05:46:22.836Z] =================================================================================================================== 00:35:02.917 [2024-11-20T05:46:22.836Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83511' 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83511 00:35:02.917 05:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83511 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82850 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 82850 ']' 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 82850 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82850 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82850' 00:35:03.176 killing process with pid 82850 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 82850 00:35:03.176 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 82850 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.fX5V20kLgP 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.fX5V20kLgP 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83579 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83579 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83579 ']' 00:35:03.744 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.745 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:03.745 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.745 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:03.745 05:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:03.745 [2024-11-20 05:46:23.501063] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
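
The key_long value generated above is an NVMe/TCP TLS PSK in interchange format: the fixed prefix NVMeTLSkey-1, a two-digit hash identifier (02 selects the SHA-384 PSK), and base64 of the configured key bytes with a 4-byte checksum appended. A minimal standalone sketch of that encoding, assuming the checksum is zlib's CRC32 appended little-endian, as in the python snippet the trace invokes via nvmf/common.sh@733 (the one-liner below is illustrative, not the canonical helper):

    # args: <configured PSK string> <digest id>; here the PSK is the 48-char hex string itself
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d=int(sys.argv[2]); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' \
        00112233445566778899aabbccddeeff0011223344556677 2

If that assumption holds, this prints exactly the NVMeTLSkey-1:02:MDAx...wWXNJw==: string captured in the trace above.
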
00:35:03.745 [2024-11-20 05:46:23.501150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.745 [2024-11-20 05:46:23.642765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.002 [2024-11-20 05:46:23.714913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.002 [2024-11-20 05:46:23.714966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.002 [2024-11-20 05:46:23.714976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.002 [2024-11-20 05:46:23.714985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.002 [2024-11-20 05:46:23.714992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:04.002 [2024-11-20 05:46:23.715368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.568 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:04.568 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:04.568 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:04.568 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:04.568 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:04.827 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.827 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.fX5V20kLgP 00:35:04.827 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fX5V20kLgP 00:35:04.827 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:04.827 [2024-11-20 05:46:24.690968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.827 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:05.085 05:46:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:35:05.345 [2024-11-20 05:46:25.095024] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:05.345 [2024-11-20 05:46:25.095302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:05.345 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:05.604 malloc0 00:35:05.604 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:05.864 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:06.123 05:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fX5V20kLgP 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fX5V20kLgP 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83690 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83690 /var/tmp/bdevperf.sock 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83690 ']' 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:06.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:06.382 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:06.382 [2024-11-20 05:46:26.126000] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
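
Taken together, the RPCs traced above are the complete TLS-enabled target bring-up that run_bdevperf is about to exercise: TCP transport, subsystem, secure listener, malloc namespace, keyring entry, and the host-to-PSK binding. A condensed sketch of the same sequence, with scripts/rpc.py abbreviated to $rpc and the addresses, NQNs, and key path copied from the trace (the /tmp path is this run's mktemp output, so treat it as illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.fX5V20kLgP                        # interchange-format PSK file, mode 0600
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k marks the listener secure (TLS)
    $rpc bdev_malloc_create 32 4096 -b malloc0     # malloc bdev backing namespace 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 "$key"          # register the PSK under the name key0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
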
00:35:06.382 [2024-11-20 05:46:26.126105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83690 ] 00:35:06.641 [2024-11-20 05:46:26.319044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.641 [2024-11-20 05:46:26.385570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:07.209 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:07.209 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:07.209 05:46:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:07.468 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:35:07.728 [2024-11-20 05:46:27.400474] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:07.728 TLSTESTn1 00:35:07.728 05:46:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:07.728 Running I/O for 10 seconds... 00:35:10.041 5649.00 IOPS, 22.07 MiB/s [2024-11-20T05:46:30.896Z] 5632.00 IOPS, 22.00 MiB/s [2024-11-20T05:46:31.832Z] 5636.00 IOPS, 22.02 MiB/s [2024-11-20T05:46:32.770Z] 5637.50 IOPS, 22.02 MiB/s [2024-11-20T05:46:33.706Z] 5638.00 IOPS, 22.02 MiB/s [2024-11-20T05:46:34.641Z] 5600.00 IOPS, 21.88 MiB/s [2024-11-20T05:46:36.017Z] 5580.00 IOPS, 21.80 MiB/s [2024-11-20T05:46:36.950Z] 5562.62 IOPS, 21.73 MiB/s [2024-11-20T05:46:37.885Z] 5553.00 IOPS, 21.69 MiB/s [2024-11-20T05:46:37.885Z] 5537.30 IOPS, 21.63 MiB/s 00:35:17.966 Latency(us) 00:35:17.966 [2024-11-20T05:46:37.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.966 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:17.966 Verification LBA range: start 0x0 length 0x2000 00:35:17.966 TLSTESTn1 : 10.02 5541.70 21.65 0.00 0.00 23058.26 4400.27 27213.04 00:35:17.966 [2024-11-20T05:46:37.886Z] =================================================================================================================== 00:35:17.967 [2024-11-20T05:46:37.886Z] Total : 5541.70 21.65 0.00 0.00 23058.26 4400.27 27213.04 00:35:17.967 { 00:35:17.967 "results": [ 00:35:17.967 { 00:35:17.967 "job": "TLSTESTn1", 00:35:17.967 "core_mask": "0x4", 00:35:17.967 "workload": "verify", 00:35:17.967 "status": "finished", 00:35:17.967 "verify_range": { 00:35:17.967 "start": 0, 00:35:17.967 "length": 8192 00:35:17.967 }, 00:35:17.967 "queue_depth": 128, 00:35:17.967 "io_size": 4096, 00:35:17.967 "runtime": 10.015162, 00:35:17.967 "iops": 5541.6976779806455, 00:35:17.967 "mibps": 21.647256554611896, 00:35:17.967 "io_failed": 0, 00:35:17.967 "io_timeout": 0, 00:35:17.967 "avg_latency_us": 23058.256602635214, 00:35:17.967 "min_latency_us": 4400.274285714286, 00:35:17.967 "max_latency_us": 27213.04380952381 00:35:17.967 } 00:35:17.967 ], 00:35:17.967 "core_count": 1 00:35:17.967 } 00:35:17.967 05:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83690 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83690 ']' 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83690 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83690 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83690' 00:35:17.967 killing process with pid 83690 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83690 00:35:17.967 Received shutdown signal, test time was about 10.000000 seconds 00:35:17.967 00:35:17.967 Latency(us) 00:35:17.967 [2024-11-20T05:46:37.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.967 [2024-11-20T05:46:37.886Z] =================================================================================================================== 00:35:17.967 [2024-11-20T05:46:37.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83690 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.fX5V20kLgP 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fX5V20kLgP 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fX5V20kLgP 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:17.967 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fX5V20kLgP 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.fX5V20kLgP 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83849 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83849 /var/tmp/bdevperf.sock 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83849 ']' 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:18.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:18.226 05:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:18.226 [2024-11-20 05:46:37.946063] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:18.226 [2024-11-20 05:46:37.946162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83849 ] 00:35:18.226 [2024-11-20 05:46:38.098470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.485 [2024-11-20 05:46:38.153826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:18.485 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:18.485 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:18.485 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:18.744 [2024-11-20 05:46:38.533479] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fX5V20kLgP': 0100666 00:35:18.744 [2024-11-20 05:46:38.533514] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:18.744 2024/11/20 05:46:38 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.fX5V20kLgP], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:35:18.744 request: 00:35:18.744 { 00:35:18.744 "method": "keyring_file_add_key", 00:35:18.744 "params": { 00:35:18.744 "name": "key0", 00:35:18.744 "path": "/tmp/tmp.fX5V20kLgP" 00:35:18.744 } 00:35:18.744 } 00:35:18.744 Got JSON-RPC error response 00:35:18.744 GoRPCClient: error on JSON-RPC call 00:35:18.744 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:35:19.003 [2024-11-20 05:46:38.834199] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:19.003 [2024-11-20 05:46:38.834255] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:35:19.003 2024/11/20 05:46:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:35:19.003 request: 00:35:19.003 { 00:35:19.003 "method": "bdev_nvme_attach_controller", 00:35:19.003 "params": { 00:35:19.003 "name": "TLSTEST", 00:35:19.003 "trtype": "tcp", 00:35:19.003 "traddr": "10.0.0.3", 00:35:19.003 "adrfam": "ipv4", 00:35:19.003 "trsvcid": "4420", 00:35:19.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:19.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:19.003 "prchk_reftag": false, 00:35:19.003 "prchk_guard": false, 00:35:19.003 "hdgst": false, 00:35:19.003 "ddgst": false, 00:35:19.003 "psk": "key0", 00:35:19.003 "allow_unrecognized_csi": false 00:35:19.003 } 00:35:19.003 } 00:35:19.003 Got JSON-RPC error response 00:35:19.003 GoRPCClient: error on JSON-RPC call 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83849 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83849 ']' 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83849 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83849 00:35:19.003 killing process with pid 83849 00:35:19.003 Received shutdown signal, test time was about 10.000000 seconds 00:35:19.003 00:35:19.003 Latency(us) 00:35:19.003 [2024-11-20T05:46:38.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.003 [2024-11-20T05:46:38.922Z] =================================================================================================================== 00:35:19.003 [2024-11-20T05:46:38.922Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83849' 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83849 00:35:19.003 05:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83849 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
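
The failure path above is deliberate: keyring_file_check_path rejects any key file readable by group or others, so after the chmod 0666 at target/tls.sh@171 the keyring_file_add_key RPC fails with "Operation not permitted" (logged as "Invalid permissions for key file ... 0100666") and the TLS attach can never resolve key0. Restoring owner-only permissions, as the test does at target/tls.sh@182 further down, is all that the same two RPCs need to succeed:

    chmod 0600 /tmp/tmp.fX5V20kLgP
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP
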
00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83579 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83579 ']' 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 83579 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83579 00:35:19.262 killing process with pid 83579 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83579' 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83579 00:35:19.262 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83579 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83899 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83899 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 83899 ']' 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:19.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:19.520 05:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:19.777 [2024-11-20 05:46:39.484610] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
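
The NOT wrapper that appears at target/tls.sh@178 just below inverts the exit status of an expected failure: the test passes only if setup_nvmf_tgt really does fail while the key file is still mode 0666. A minimal sketch of that idiom (an illustrative reimplementation, not the exact autotest_common.sh helper, which also normalizes statuses above 128 as the es=1 trace lines show):

    NOT() {
        if "$@"; then
            return 1        # the wrapped command unexpectedly succeeded
        fi
        return 0            # the expected failure happened
    }
    # usage mirroring the trace: NOT setup_nvmf_tgt /tmp/tmp.fX5V20kLgP
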
00:35:19.777 [2024-11-20 05:46:39.484714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.777 [2024-11-20 05:46:39.645135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.034 [2024-11-20 05:46:39.703780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.034 [2024-11-20 05:46:39.703835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:20.034 [2024-11-20 05:46:39.703851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.034 [2024-11-20 05:46:39.703865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.034 [2024-11-20 05:46:39.703877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:20.035 [2024-11-20 05:46:39.704243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.fX5V20kLgP 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fX5V20kLgP 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.fX5V20kLgP 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fX5V20kLgP 00:35:20.967 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:21.226 [2024-11-20 05:46:40.910852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:21.226 05:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:21.484 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:35:21.742 [2024-11-20 05:46:41.627037] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:21.742 [2024-11-20 05:46:41.627281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:21.742 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:22.308 malloc0 00:35:22.308 05:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:22.566 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:22.824 [2024-11-20 05:46:42.564696] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fX5V20kLgP': 0100666 00:35:22.824 [2024-11-20 05:46:42.564741] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:22.824 2024/11/20 05:46:42 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.fX5V20kLgP], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:35:22.824 request: 00:35:22.824 { 00:35:22.824 "method": "keyring_file_add_key", 00:35:22.824 "params": { 00:35:22.824 "name": "key0", 00:35:22.824 "path": "/tmp/tmp.fX5V20kLgP" 00:35:22.824 } 00:35:22.824 } 00:35:22.824 Got JSON-RPC error response 00:35:22.824 GoRPCClient: error on JSON-RPC call 00:35:22.824 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:35:23.082 [2024-11-20 05:46:42.768748] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:35:23.082 [2024-11-20 05:46:42.768801] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:35:23.082 2024/11/20 05:46:42 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:35:23.082 request: 00:35:23.082 { 00:35:23.082 "method": "nvmf_subsystem_add_host", 00:35:23.082 "params": { 00:35:23.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:23.082 "host": "nqn.2016-06.io.spdk:host1", 00:35:23.082 "psk": "key0" 00:35:23.082 } 00:35:23.082 } 00:35:23.082 Got JSON-RPC error response 00:35:23.082 GoRPCClient: error on JSON-RPC call 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83899 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 83899 ']' 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 83899 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83899 00:35:23.082 killing process with pid 83899 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83899' 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 83899 00:35:23.082 05:46:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 83899 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.fX5V20kLgP 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84022 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84022 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84022 ']' 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:23.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:23.340 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:23.340 [2024-11-20 05:46:43.078989] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:23.340 [2024-11-20 05:46:43.079094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.340 [2024-11-20 05:46:43.226585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:23.599 [2024-11-20 05:46:43.268829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:23.599 [2024-11-20 05:46:43.268879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:23.599 [2024-11-20 05:46:43.268889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:23.599 [2024-11-20 05:46:43.268898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:23.599 [2024-11-20 05:46:43.268904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:23.599 [2024-11-20 05:46:43.269185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.166 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:24.166 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:24.166 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:24.166 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:24.166 05:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:24.166 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:24.166 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.fX5V20kLgP 00:35:24.166 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fX5V20kLgP 00:35:24.166 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:24.425 [2024-11-20 05:46:44.191380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:24.425 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:24.682 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:35:24.940 [2024-11-20 05:46:44.651468] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:24.940 [2024-11-20 05:46:44.651666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:24.940 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:25.207 malloc0 00:35:25.207 05:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:25.207 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:25.465 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:25.724 05:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84133 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84133 /var/tmp/bdevperf.sock 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84133 ']' 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:25.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:25.724 05:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:25.983 [2024-11-20 05:46:45.671590] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:25.983 [2024-11-20 05:46:45.671672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84133 ] 00:35:25.983 [2024-11-20 05:46:45.820226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.983 [2024-11-20 05:46:45.886264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.919 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:26.919 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:26.919 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:27.176 05:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:35:27.435 [2024-11-20 05:46:47.265603] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:27.435 TLSTESTn1 00:35:27.693 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:27.991 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:35:27.991 "subsystems": [ 00:35:27.991 { 00:35:27.991 "subsystem": "keyring", 00:35:27.991 "config": [ 00:35:27.991 { 00:35:27.991 "method": "keyring_file_add_key", 00:35:27.991 "params": { 00:35:27.991 "name": "key0", 00:35:27.991 "path": "/tmp/tmp.fX5V20kLgP" 00:35:27.991 } 00:35:27.991 } 00:35:27.991 ] 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "subsystem": "iobuf", 00:35:27.991 "config": [ 00:35:27.991 { 00:35:27.991 "method": "iobuf_set_options", 00:35:27.991 "params": { 00:35:27.991 "enable_numa": false, 00:35:27.991 "large_bufsize": 135168, 00:35:27.991 
"large_pool_count": 1024, 00:35:27.991 "small_bufsize": 8192, 00:35:27.991 "small_pool_count": 8192 00:35:27.991 } 00:35:27.991 } 00:35:27.991 ] 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "subsystem": "sock", 00:35:27.991 "config": [ 00:35:27.991 { 00:35:27.991 "method": "sock_set_default_impl", 00:35:27.991 "params": { 00:35:27.991 "impl_name": "posix" 00:35:27.991 } 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "method": "sock_impl_set_options", 00:35:27.991 "params": { 00:35:27.991 "enable_ktls": false, 00:35:27.991 "enable_placement_id": 0, 00:35:27.991 "enable_quickack": false, 00:35:27.991 "enable_recv_pipe": true, 00:35:27.991 "enable_zerocopy_send_client": false, 00:35:27.991 "enable_zerocopy_send_server": true, 00:35:27.991 "impl_name": "ssl", 00:35:27.991 "recv_buf_size": 4096, 00:35:27.991 "send_buf_size": 4096, 00:35:27.991 "tls_version": 0, 00:35:27.991 "zerocopy_threshold": 0 00:35:27.991 } 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "method": "sock_impl_set_options", 00:35:27.991 "params": { 00:35:27.991 "enable_ktls": false, 00:35:27.991 "enable_placement_id": 0, 00:35:27.991 "enable_quickack": false, 00:35:27.991 "enable_recv_pipe": true, 00:35:27.991 "enable_zerocopy_send_client": false, 00:35:27.991 "enable_zerocopy_send_server": true, 00:35:27.991 "impl_name": "posix", 00:35:27.991 "recv_buf_size": 2097152, 00:35:27.991 "send_buf_size": 2097152, 00:35:27.991 "tls_version": 0, 00:35:27.991 "zerocopy_threshold": 0 00:35:27.991 } 00:35:27.991 } 00:35:27.991 ] 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "subsystem": "vmd", 00:35:27.991 "config": [] 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "subsystem": "accel", 00:35:27.991 "config": [ 00:35:27.991 { 00:35:27.991 "method": "accel_set_options", 00:35:27.991 "params": { 00:35:27.991 "buf_count": 2048, 00:35:27.991 "large_cache_size": 16, 00:35:27.991 "sequence_count": 2048, 00:35:27.991 "small_cache_size": 128, 00:35:27.991 "task_count": 2048 00:35:27.991 } 00:35:27.991 } 00:35:27.991 ] 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "subsystem": "bdev", 00:35:27.991 "config": [ 00:35:27.991 { 00:35:27.991 "method": "bdev_set_options", 00:35:27.991 "params": { 00:35:27.991 "bdev_auto_examine": true, 00:35:27.991 "bdev_io_cache_size": 256, 00:35:27.991 "bdev_io_pool_size": 65535, 00:35:27.991 "iobuf_large_cache_size": 16, 00:35:27.991 "iobuf_small_cache_size": 128 00:35:27.991 } 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "method": "bdev_raid_set_options", 00:35:27.991 "params": { 00:35:27.991 "process_max_bandwidth_mb_sec": 0, 00:35:27.991 "process_window_size_kb": 1024 00:35:27.991 } 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "method": "bdev_iscsi_set_options", 00:35:27.991 "params": { 00:35:27.991 "timeout_sec": 30 00:35:27.991 } 00:35:27.991 }, 00:35:27.991 { 00:35:27.991 "method": "bdev_nvme_set_options", 00:35:27.991 "params": { 00:35:27.991 "action_on_timeout": "none", 00:35:27.991 "allow_accel_sequence": false, 00:35:27.991 "arbitration_burst": 0, 00:35:27.991 "bdev_retry_count": 3, 00:35:27.991 "ctrlr_loss_timeout_sec": 0, 00:35:27.991 "delay_cmd_submit": true, 00:35:27.991 "dhchap_dhgroups": [ 00:35:27.991 "null", 00:35:27.991 "ffdhe2048", 00:35:27.991 "ffdhe3072", 00:35:27.991 "ffdhe4096", 00:35:27.992 "ffdhe6144", 00:35:27.992 "ffdhe8192" 00:35:27.992 ], 00:35:27.992 "dhchap_digests": [ 00:35:27.992 "sha256", 00:35:27.992 "sha384", 00:35:27.992 "sha512" 00:35:27.992 ], 00:35:27.992 "disable_auto_failback": false, 00:35:27.992 "fast_io_fail_timeout_sec": 0, 00:35:27.992 "generate_uuids": false, 00:35:27.992 
"high_priority_weight": 0, 00:35:27.992 "io_path_stat": false, 00:35:27.992 "io_queue_requests": 0, 00:35:27.992 "keep_alive_timeout_ms": 10000, 00:35:27.992 "low_priority_weight": 0, 00:35:27.992 "medium_priority_weight": 0, 00:35:27.992 "nvme_adminq_poll_period_us": 10000, 00:35:27.992 "nvme_error_stat": false, 00:35:27.992 "nvme_ioq_poll_period_us": 0, 00:35:27.992 "rdma_cm_event_timeout_ms": 0, 00:35:27.992 "rdma_max_cq_size": 0, 00:35:27.992 "rdma_srq_size": 0, 00:35:27.992 "reconnect_delay_sec": 0, 00:35:27.992 "timeout_admin_us": 0, 00:35:27.992 "timeout_us": 0, 00:35:27.992 "transport_ack_timeout": 0, 00:35:27.992 "transport_retry_count": 4, 00:35:27.992 "transport_tos": 0 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "bdev_nvme_set_hotplug", 00:35:27.992 "params": { 00:35:27.992 "enable": false, 00:35:27.992 "period_us": 100000 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "bdev_malloc_create", 00:35:27.992 "params": { 00:35:27.992 "block_size": 4096, 00:35:27.992 "dif_is_head_of_md": false, 00:35:27.992 "dif_pi_format": 0, 00:35:27.992 "dif_type": 0, 00:35:27.992 "md_size": 0, 00:35:27.992 "name": "malloc0", 00:35:27.992 "num_blocks": 8192, 00:35:27.992 "optimal_io_boundary": 0, 00:35:27.992 "physical_block_size": 4096, 00:35:27.992 "uuid": "873e7099-cd1b-4d19-afe7-7e2468b88c35" 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "bdev_wait_for_examine" 00:35:27.992 } 00:35:27.992 ] 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "subsystem": "nbd", 00:35:27.992 "config": [] 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "subsystem": "scheduler", 00:35:27.992 "config": [ 00:35:27.992 { 00:35:27.992 "method": "framework_set_scheduler", 00:35:27.992 "params": { 00:35:27.992 "name": "static" 00:35:27.992 } 00:35:27.992 } 00:35:27.992 ] 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "subsystem": "nvmf", 00:35:27.992 "config": [ 00:35:27.992 { 00:35:27.992 "method": "nvmf_set_config", 00:35:27.992 "params": { 00:35:27.992 "admin_cmd_passthru": { 00:35:27.992 "identify_ctrlr": false 00:35:27.992 }, 00:35:27.992 "dhchap_dhgroups": [ 00:35:27.992 "null", 00:35:27.992 "ffdhe2048", 00:35:27.992 "ffdhe3072", 00:35:27.992 "ffdhe4096", 00:35:27.992 "ffdhe6144", 00:35:27.992 "ffdhe8192" 00:35:27.992 ], 00:35:27.992 "dhchap_digests": [ 00:35:27.992 "sha256", 00:35:27.992 "sha384", 00:35:27.992 "sha512" 00:35:27.992 ], 00:35:27.992 "discovery_filter": "match_any" 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "nvmf_set_max_subsystems", 00:35:27.992 "params": { 00:35:27.992 "max_subsystems": 1024 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "nvmf_set_crdt", 00:35:27.992 "params": { 00:35:27.992 "crdt1": 0, 00:35:27.992 "crdt2": 0, 00:35:27.992 "crdt3": 0 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "nvmf_create_transport", 00:35:27.992 "params": { 00:35:27.992 "abort_timeout_sec": 1, 00:35:27.992 "ack_timeout": 0, 00:35:27.992 "buf_cache_size": 4294967295, 00:35:27.992 "c2h_success": false, 00:35:27.992 "data_wr_pool_size": 0, 00:35:27.992 "dif_insert_or_strip": false, 00:35:27.992 "in_capsule_data_size": 4096, 00:35:27.992 "io_unit_size": 131072, 00:35:27.992 "max_aq_depth": 128, 00:35:27.992 "max_io_qpairs_per_ctrlr": 127, 00:35:27.992 "max_io_size": 131072, 00:35:27.992 "max_queue_depth": 128, 00:35:27.992 "num_shared_buffers": 511, 00:35:27.992 "sock_priority": 0, 00:35:27.992 "trtype": "TCP", 00:35:27.992 "zcopy": false 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 
00:35:27.992 "method": "nvmf_create_subsystem", 00:35:27.992 "params": { 00:35:27.992 "allow_any_host": false, 00:35:27.992 "ana_reporting": false, 00:35:27.992 "max_cntlid": 65519, 00:35:27.992 "max_namespaces": 10, 00:35:27.992 "min_cntlid": 1, 00:35:27.992 "model_number": "SPDK bdev Controller", 00:35:27.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.992 "serial_number": "SPDK00000000000001" 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "nvmf_subsystem_add_host", 00:35:27.992 "params": { 00:35:27.992 "host": "nqn.2016-06.io.spdk:host1", 00:35:27.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.992 "psk": "key0" 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "nvmf_subsystem_add_ns", 00:35:27.992 "params": { 00:35:27.992 "namespace": { 00:35:27.992 "bdev_name": "malloc0", 00:35:27.992 "nguid": "873E7099CD1B4D19AFE77E2468B88C35", 00:35:27.992 "no_auto_visible": false, 00:35:27.992 "nsid": 1, 00:35:27.992 "uuid": "873e7099-cd1b-4d19-afe7-7e2468b88c35" 00:35:27.992 }, 00:35:27.992 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:35:27.992 } 00:35:27.992 }, 00:35:27.992 { 00:35:27.992 "method": "nvmf_subsystem_add_listener", 00:35:27.992 "params": { 00:35:27.992 "listen_address": { 00:35:27.992 "adrfam": "IPv4", 00:35:27.992 "traddr": "10.0.0.3", 00:35:27.992 "trsvcid": "4420", 00:35:27.992 "trtype": "TCP" 00:35:27.992 }, 00:35:27.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.992 "secure_channel": true 00:35:27.992 } 00:35:27.992 } 00:35:27.992 ] 00:35:27.992 } 00:35:27.992 ] 00:35:27.992 }' 00:35:27.992 05:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:35:28.262 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:35:28.262 "subsystems": [ 00:35:28.262 { 00:35:28.262 "subsystem": "keyring", 00:35:28.262 "config": [ 00:35:28.262 { 00:35:28.262 "method": "keyring_file_add_key", 00:35:28.262 "params": { 00:35:28.262 "name": "key0", 00:35:28.262 "path": "/tmp/tmp.fX5V20kLgP" 00:35:28.262 } 00:35:28.262 } 00:35:28.262 ] 00:35:28.262 }, 00:35:28.262 { 00:35:28.262 "subsystem": "iobuf", 00:35:28.262 "config": [ 00:35:28.262 { 00:35:28.262 "method": "iobuf_set_options", 00:35:28.262 "params": { 00:35:28.262 "enable_numa": false, 00:35:28.262 "large_bufsize": 135168, 00:35:28.262 "large_pool_count": 1024, 00:35:28.262 "small_bufsize": 8192, 00:35:28.262 "small_pool_count": 8192 00:35:28.262 } 00:35:28.262 } 00:35:28.262 ] 00:35:28.262 }, 00:35:28.262 { 00:35:28.262 "subsystem": "sock", 00:35:28.262 "config": [ 00:35:28.262 { 00:35:28.262 "method": "sock_set_default_impl", 00:35:28.262 "params": { 00:35:28.262 "impl_name": "posix" 00:35:28.262 } 00:35:28.262 }, 00:35:28.262 { 00:35:28.262 "method": "sock_impl_set_options", 00:35:28.262 "params": { 00:35:28.262 "enable_ktls": false, 00:35:28.262 "enable_placement_id": 0, 00:35:28.262 "enable_quickack": false, 00:35:28.262 "enable_recv_pipe": true, 00:35:28.262 "enable_zerocopy_send_client": false, 00:35:28.262 "enable_zerocopy_send_server": true, 00:35:28.262 "impl_name": "ssl", 00:35:28.262 "recv_buf_size": 4096, 00:35:28.262 "send_buf_size": 4096, 00:35:28.262 "tls_version": 0, 00:35:28.262 "zerocopy_threshold": 0 00:35:28.262 } 00:35:28.262 }, 00:35:28.262 { 00:35:28.262 "method": "sock_impl_set_options", 00:35:28.262 "params": { 00:35:28.262 "enable_ktls": false, 00:35:28.262 "enable_placement_id": 0, 00:35:28.262 "enable_quickack": false, 00:35:28.262 
"enable_recv_pipe": true, 00:35:28.262 "enable_zerocopy_send_client": false, 00:35:28.262 "enable_zerocopy_send_server": true, 00:35:28.262 "impl_name": "posix", 00:35:28.262 "recv_buf_size": 2097152, 00:35:28.262 "send_buf_size": 2097152, 00:35:28.262 "tls_version": 0, 00:35:28.262 "zerocopy_threshold": 0 00:35:28.262 } 00:35:28.262 } 00:35:28.262 ] 00:35:28.262 }, 00:35:28.262 { 00:35:28.262 "subsystem": "vmd", 00:35:28.262 "config": [] 00:35:28.262 }, 00:35:28.262 { 00:35:28.263 "subsystem": "accel", 00:35:28.263 "config": [ 00:35:28.263 { 00:35:28.263 "method": "accel_set_options", 00:35:28.263 "params": { 00:35:28.263 "buf_count": 2048, 00:35:28.263 "large_cache_size": 16, 00:35:28.263 "sequence_count": 2048, 00:35:28.263 "small_cache_size": 128, 00:35:28.263 "task_count": 2048 00:35:28.263 } 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "subsystem": "bdev", 00:35:28.263 "config": [ 00:35:28.263 { 00:35:28.263 "method": "bdev_set_options", 00:35:28.263 "params": { 00:35:28.263 "bdev_auto_examine": true, 00:35:28.263 "bdev_io_cache_size": 256, 00:35:28.263 "bdev_io_pool_size": 65535, 00:35:28.263 "iobuf_large_cache_size": 16, 00:35:28.263 "iobuf_small_cache_size": 128 00:35:28.263 } 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "method": "bdev_raid_set_options", 00:35:28.263 "params": { 00:35:28.263 "process_max_bandwidth_mb_sec": 0, 00:35:28.263 "process_window_size_kb": 1024 00:35:28.263 } 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "method": "bdev_iscsi_set_options", 00:35:28.263 "params": { 00:35:28.263 "timeout_sec": 30 00:35:28.263 } 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "method": "bdev_nvme_set_options", 00:35:28.263 "params": { 00:35:28.263 "action_on_timeout": "none", 00:35:28.263 "allow_accel_sequence": false, 00:35:28.263 "arbitration_burst": 0, 00:35:28.263 "bdev_retry_count": 3, 00:35:28.263 "ctrlr_loss_timeout_sec": 0, 00:35:28.263 "delay_cmd_submit": true, 00:35:28.263 "dhchap_dhgroups": [ 00:35:28.263 "null", 00:35:28.263 "ffdhe2048", 00:35:28.263 "ffdhe3072", 00:35:28.263 "ffdhe4096", 00:35:28.263 "ffdhe6144", 00:35:28.263 "ffdhe8192" 00:35:28.263 ], 00:35:28.263 "dhchap_digests": [ 00:35:28.263 "sha256", 00:35:28.263 "sha384", 00:35:28.263 "sha512" 00:35:28.263 ], 00:35:28.263 "disable_auto_failback": false, 00:35:28.263 "fast_io_fail_timeout_sec": 0, 00:35:28.263 "generate_uuids": false, 00:35:28.263 "high_priority_weight": 0, 00:35:28.263 "io_path_stat": false, 00:35:28.263 "io_queue_requests": 512, 00:35:28.263 "keep_alive_timeout_ms": 10000, 00:35:28.263 "low_priority_weight": 0, 00:35:28.263 "medium_priority_weight": 0, 00:35:28.263 "nvme_adminq_poll_period_us": 10000, 00:35:28.263 "nvme_error_stat": false, 00:35:28.263 "nvme_ioq_poll_period_us": 0, 00:35:28.263 "rdma_cm_event_timeout_ms": 0, 00:35:28.263 "rdma_max_cq_size": 0, 00:35:28.263 "rdma_srq_size": 0, 00:35:28.263 "reconnect_delay_sec": 0, 00:35:28.263 "timeout_admin_us": 0, 00:35:28.263 "timeout_us": 0, 00:35:28.263 "transport_ack_timeout": 0, 00:35:28.263 "transport_retry_count": 4, 00:35:28.263 "transport_tos": 0 00:35:28.263 } 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "method": "bdev_nvme_attach_controller", 00:35:28.263 "params": { 00:35:28.263 "adrfam": "IPv4", 00:35:28.263 "ctrlr_loss_timeout_sec": 0, 00:35:28.263 "ddgst": false, 00:35:28.263 "fast_io_fail_timeout_sec": 0, 00:35:28.263 "hdgst": false, 00:35:28.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:28.263 "multipath": "multipath", 00:35:28.263 "name": "TLSTEST", 00:35:28.263 "prchk_guard": false, 
00:35:28.263 "prchk_reftag": false, 00:35:28.263 "psk": "key0", 00:35:28.263 "reconnect_delay_sec": 0, 00:35:28.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.263 "traddr": "10.0.0.3", 00:35:28.263 "trsvcid": "4420", 00:35:28.263 "trtype": "TCP" 00:35:28.263 } 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "method": "bdev_nvme_set_hotplug", 00:35:28.263 "params": { 00:35:28.263 "enable": false, 00:35:28.263 "period_us": 100000 00:35:28.263 } 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "method": "bdev_wait_for_examine" 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 }, 00:35:28.263 { 00:35:28.263 "subsystem": "nbd", 00:35:28.263 "config": [] 00:35:28.263 } 00:35:28.263 ] 00:35:28.263 }' 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84133 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84133 ']' 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84133 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84133 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:35:28.263 killing process with pid 84133 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84133' 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84133 00:35:28.263 Received shutdown signal, test time was about 10.000000 seconds 00:35:28.263 00:35:28.263 Latency(us) 00:35:28.263 [2024-11-20T05:46:48.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.263 [2024-11-20T05:46:48.182Z] =================================================================================================================== 00:35:28.263 [2024-11-20T05:46:48.182Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:28.263 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84133 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84022 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84022 ']' 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84022 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84022 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:28.535 killing process with pid 84022 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 84022' 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84022 00:35:28.535 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84022 00:35:28.794 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:35:28.794 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:28.794 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:28.794 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:28.794 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:35:28.794 "subsystems": [ 00:35:28.794 { 00:35:28.794 "subsystem": "keyring", 00:35:28.794 "config": [ 00:35:28.794 { 00:35:28.794 "method": "keyring_file_add_key", 00:35:28.794 "params": { 00:35:28.794 "name": "key0", 00:35:28.794 "path": "/tmp/tmp.fX5V20kLgP" 00:35:28.794 } 00:35:28.794 } 00:35:28.794 ] 00:35:28.794 }, 00:35:28.794 { 00:35:28.794 "subsystem": "iobuf", 00:35:28.794 "config": [ 00:35:28.794 { 00:35:28.794 "method": "iobuf_set_options", 00:35:28.794 "params": { 00:35:28.794 "enable_numa": false, 00:35:28.794 "large_bufsize": 135168, 00:35:28.794 "large_pool_count": 1024, 00:35:28.794 "small_bufsize": 8192, 00:35:28.794 "small_pool_count": 8192 00:35:28.794 } 00:35:28.794 } 00:35:28.794 ] 00:35:28.794 }, 00:35:28.794 { 00:35:28.794 "subsystem": "sock", 00:35:28.794 "config": [ 00:35:28.794 { 00:35:28.794 "method": "sock_set_default_impl", 00:35:28.794 "params": { 00:35:28.794 "impl_name": "posix" 00:35:28.794 } 00:35:28.794 }, 00:35:28.794 { 00:35:28.794 "method": "sock_impl_set_options", 00:35:28.794 "params": { 00:35:28.794 "enable_ktls": false, 00:35:28.794 "enable_placement_id": 0, 00:35:28.794 "enable_quickack": false, 00:35:28.794 "enable_recv_pipe": true, 00:35:28.794 "enable_zerocopy_send_client": false, 00:35:28.794 "enable_zerocopy_send_server": true, 00:35:28.794 "impl_name": "ssl", 00:35:28.794 "recv_buf_size": 4096, 00:35:28.794 "send_buf_size": 4096, 00:35:28.794 "tls_version": 0, 00:35:28.794 "zerocopy_threshold": 0 00:35:28.794 } 00:35:28.794 }, 00:35:28.794 { 00:35:28.794 "method": "sock_impl_set_options", 00:35:28.794 "params": { 00:35:28.794 "enable_ktls": false, 00:35:28.794 "enable_placement_id": 0, 00:35:28.794 "enable_quickack": false, 00:35:28.794 "enable_recv_pipe": true, 00:35:28.794 "enable_zerocopy_send_client": false, 00:35:28.794 "enable_zerocopy_send_server": true, 00:35:28.795 "impl_name": "posix", 00:35:28.795 "recv_buf_size": 2097152, 00:35:28.795 "send_buf_size": 2097152, 00:35:28.795 "tls_version": 0, 00:35:28.795 "zerocopy_threshold": 0 00:35:28.795 } 00:35:28.795 } 00:35:28.795 ] 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "subsystem": "vmd", 00:35:28.795 "config": [] 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "subsystem": "accel", 00:35:28.795 "config": [ 00:35:28.795 { 00:35:28.795 "method": "accel_set_options", 00:35:28.795 "params": { 00:35:28.795 "buf_count": 2048, 00:35:28.795 "large_cache_size": 16, 00:35:28.795 "sequence_count": 2048, 00:35:28.795 "small_cache_size": 128, 00:35:28.795 "task_count": 2048 00:35:28.795 } 00:35:28.795 } 00:35:28.795 ] 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "subsystem": "bdev", 00:35:28.795 "config": [ 00:35:28.795 { 00:35:28.795 "method": "bdev_set_options", 00:35:28.795 "params": { 00:35:28.795 "bdev_auto_examine": true, 00:35:28.795 
"bdev_io_cache_size": 256, 00:35:28.795 "bdev_io_pool_size": 65535, 00:35:28.795 "iobuf_large_cache_size": 16, 00:35:28.795 "iobuf_small_cache_size": 128 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "bdev_raid_set_options", 00:35:28.795 "params": { 00:35:28.795 "process_max_bandwidth_mb_sec": 0, 00:35:28.795 "process_window_size_kb": 1024 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "bdev_iscsi_set_options", 00:35:28.795 "params": { 00:35:28.795 "timeout_sec": 30 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "bdev_nvme_set_options", 00:35:28.795 "params": { 00:35:28.795 "action_on_timeout": "none", 00:35:28.795 "allow_accel_sequence": false, 00:35:28.795 "arbitration_burst": 0, 00:35:28.795 "bdev_retry_count": 3, 00:35:28.795 "ctrlr_loss_timeout_sec": 0, 00:35:28.795 "delay_cmd_submit": true, 00:35:28.795 "dhchap_dhgroups": [ 00:35:28.795 "null", 00:35:28.795 "ffdhe2048", 00:35:28.795 "ffdhe3072", 00:35:28.795 "ffdhe4096", 00:35:28.795 "ffdhe6144", 00:35:28.795 "ffdhe8192" 00:35:28.795 ], 00:35:28.795 "dhchap_digests": [ 00:35:28.795 "sha256", 00:35:28.795 "sha384", 00:35:28.795 "sha512" 00:35:28.795 ], 00:35:28.795 "disable_auto_failback": false, 00:35:28.795 "fast_io_fail_timeout_sec": 0, 00:35:28.795 "generate_uuids": false, 00:35:28.795 "high_priority_weight": 0, 00:35:28.795 "io_path_stat": false, 00:35:28.795 "io_queue_requests": 0, 00:35:28.795 "keep_alive_timeout_ms": 10000, 00:35:28.795 "low_priority_weight": 0, 00:35:28.795 "medium_priority_weight": 0, 00:35:28.795 "nvme_adminq_poll_period_us": 10000, 00:35:28.795 "nvme_error_stat": false, 00:35:28.795 "nvme_ioq_poll_period_us": 0, 00:35:28.795 "rdma_cm_event_timeout_ms": 0, 00:35:28.795 "rdma_max_cq_size": 0, 00:35:28.795 "rdma_srq_size": 0, 00:35:28.795 "reconnect_delay_sec": 0, 00:35:28.795 "timeout_admin_us": 0, 00:35:28.795 "timeout_us": 0, 00:35:28.795 "transport_ack_timeout": 0, 00:35:28.795 "transport_retry_count": 4, 00:35:28.795 "transport_tos": 0 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "bdev_nvme_set_hotplug", 00:35:28.795 "params": { 00:35:28.795 "enable": false, 00:35:28.795 "period_us": 100000 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "bdev_malloc_create", 00:35:28.795 "params": { 00:35:28.795 "block_size": 4096, 00:35:28.795 "dif_is_head_of_md": false, 00:35:28.795 "dif_pi_format": 0, 00:35:28.795 "dif_type": 0, 00:35:28.795 "md_size": 0, 00:35:28.795 "name": "malloc0", 00:35:28.795 "num_blocks": 8192, 00:35:28.795 "optimal_io_boundary": 0, 00:35:28.795 "physical_block_size": 4096, 00:35:28.795 "uuid": "873e7099-cd1b-4d19-afe7-7e2468b88c35" 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "bdev_wait_for_examine" 00:35:28.795 } 00:35:28.795 ] 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "subsystem": "nbd", 00:35:28.795 "config": [] 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "subsystem": "scheduler", 00:35:28.795 "config": [ 00:35:28.795 { 00:35:28.795 "method": "framework_set_scheduler", 00:35:28.795 "params": { 00:35:28.795 "name": "static" 00:35:28.795 } 00:35:28.795 } 00:35:28.795 ] 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "subsystem": "nvmf", 00:35:28.795 "config": [ 00:35:28.795 { 00:35:28.795 "method": "nvmf_set_config", 00:35:28.795 "params": { 00:35:28.795 "admin_cmd_passthru": { 00:35:28.795 "identify_ctrlr": false 00:35:28.795 }, 00:35:28.795 "dhchap_dhgroups": [ 00:35:28.795 "null", 00:35:28.795 "ffdhe2048", 00:35:28.795 "ffdhe3072", 00:35:28.795 
"ffdhe4096", 00:35:28.795 "ffdhe6144", 00:35:28.795 "ffdhe8192" 00:35:28.795 ], 00:35:28.795 "dhchap_digests": [ 00:35:28.795 "sha256", 00:35:28.795 "sha384", 00:35:28.795 "sha512" 00:35:28.795 ], 00:35:28.795 "discovery_filter": "match_any" 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "nvmf_set_max_subsystems", 00:35:28.795 "params": { 00:35:28.795 "max_subsystems": 1024 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "nvmf_set_crdt", 00:35:28.795 "params": { 00:35:28.795 "crdt1": 0, 00:35:28.795 "crdt2": 0, 00:35:28.795 "crdt3": 0 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "nvmf_create_transport", 00:35:28.795 "params": { 00:35:28.795 "abort_timeout_sec": 1, 00:35:28.795 "ack_timeout": 0, 00:35:28.795 "buf_cache_size": 4294967295, 00:35:28.795 "c2h_success": false, 00:35:28.795 "data_wr_pool_size": 0, 00:35:28.795 "dif_insert_or_strip": false, 00:35:28.795 "in_capsule_data_size": 4096, 00:35:28.795 "io_unit_size": 131072, 00:35:28.795 "max_aq_depth": 128, 00:35:28.795 "max_io_qpairs_per_ctrlr": 127, 00:35:28.795 "max_io_size": 131072, 00:35:28.795 "max_queue_depth": 128, 00:35:28.795 "num_shared_buffers": 511, 00:35:28.795 "sock_priority": 0, 00:35:28.795 "trtype": "TCP", 00:35:28.795 "zcopy": false 00:35:28.795 } 00:35:28.795 }, 00:35:28.795 { 00:35:28.795 "method": "nvmf_create_subsystem", 00:35:28.796 "params": { 00:35:28.796 "allow_any_host": false, 00:35:28.796 "ana_reporting": false, 00:35:28.796 "max_cntlid": 65519, 00:35:28.796 "max_namespaces": 10, 00:35:28.796 "min_cntlid": 1, 00:35:28.796 "model_number": "SPDK bdev Controller", 00:35:28.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.796 "serial_number": "SPDK00000000000001" 00:35:28.796 } 00:35:28.796 }, 00:35:28.796 { 00:35:28.796 "method": "nvmf_subsystem_add_host", 00:35:28.796 "params": { 00:35:28.796 "host": "nqn.2016-06.io.spdk:host1", 00:35:28.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.796 "psk": "key0" 00:35:28.796 } 00:35:28.796 }, 00:35:28.796 { 00:35:28.796 "method": "nvmf_subsystem_add_ns", 00:35:28.796 "params": { 00:35:28.796 "namespace": { 00:35:28.796 "bdev_name": "malloc0", 00:35:28.796 "nguid": "873E7099CD1B4D19AFE77E2468B88C35", 00:35:28.796 "no_auto_visible": false, 00:35:28.796 "nsid": 1, 00:35:28.796 "uuid": "873e7099-cd1b-4d19-afe7-7e2468b88c35" 00:35:28.796 }, 00:35:28.796 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:35:28.796 } 00:35:28.796 }, 00:35:28.796 { 00:35:28.796 "method": "nvmf_subsystem_add_listener", 00:35:28.796 "params": { 00:35:28.796 "listen_address": { 00:35:28.796 "adrfam": "IPv4", 00:35:28.796 "traddr": "10.0.0.3", 00:35:28.796 "trsvcid": "4420", 00:35:28.796 "trtype": "TCP" 00:35:28.796 }, 00:35:28.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.796 "secure_channel": true 00:35:28.796 } 00:35:28.796 } 00:35:28.796 ] 00:35:28.796 } 00:35:28.796 ] 00:35:28.796 }' 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84218 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84218 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84218 ']' 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:35:28.796 
05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:28.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:28.796 05:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:28.796 [2024-11-20 05:46:48.595188] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:28.796 [2024-11-20 05:46:48.595289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.053 [2024-11-20 05:46:48.747395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.053 [2024-11-20 05:46:48.794546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.053 [2024-11-20 05:46:48.794594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.053 [2024-11-20 05:46:48.794604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.053 [2024-11-20 05:46:48.794612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.053 [2024-11-20 05:46:48.794618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:29.053 [2024-11-20 05:46:48.794944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.311 [2024-11-20 05:46:49.012221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.311 [2024-11-20 05:46:49.044190] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:29.311 [2024-11-20 05:46:49.044415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
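The -c /dev/fd/62 argument in the nvmf_tgt command above is bash process substitution at work: the harness dumps the live target configuration with save_config and feeds the JSON back to a fresh target at startup. A minimal sketch of the same pattern (illustrative, not from the log; the scripts/rpc.py and build/bin/nvmf_tgt locations are assumed from the repo layout seen in the traced commands):

    # capture the running target's JSON configuration over its RPC socket
    tgtconf=$(scripts/rpc.py -s /var/tmp/spdk.sock save_config)
    # replay it on a fresh target; <(...) shows up as /dev/fd/NN in the trace
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &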
00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84268 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84268 /var/tmp/bdevperf.sock 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84268 ']' 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:29.879 05:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:35:29.879 "subsystems": [ 00:35:29.879 { 00:35:29.879 "subsystem": "keyring", 00:35:29.879 "config": [ 00:35:29.879 { 00:35:29.879 "method": "keyring_file_add_key", 00:35:29.879 "params": { 00:35:29.879 "name": "key0", 00:35:29.879 "path": "/tmp/tmp.fX5V20kLgP" 00:35:29.879 } 00:35:29.879 } 00:35:29.879 ] 00:35:29.879 }, 00:35:29.879 { 00:35:29.879 "subsystem": "iobuf", 00:35:29.879 "config": [ 00:35:29.879 { 00:35:29.879 "method": "iobuf_set_options", 00:35:29.879 "params": { 00:35:29.879 "enable_numa": false, 00:35:29.879 "large_bufsize": 135168, 00:35:29.879 "large_pool_count": 1024, 00:35:29.879 "small_bufsize": 8192, 00:35:29.879 "small_pool_count": 8192 00:35:29.879 } 00:35:29.879 } 00:35:29.879 ] 00:35:29.879 }, 00:35:29.879 { 00:35:29.879 "subsystem": "sock", 00:35:29.879 "config": [ 00:35:29.879 { 00:35:29.879 "method": "sock_set_default_impl", 00:35:29.879 "params": { 00:35:29.879 "impl_name": "posix" 00:35:29.879 } 00:35:29.879 }, 00:35:29.879 { 00:35:29.879 "method": "sock_impl_set_options", 00:35:29.879 "params": { 00:35:29.879 "enable_ktls": false, 00:35:29.879 "enable_placement_id": 0, 00:35:29.879 "enable_quickack": false, 00:35:29.879 "enable_recv_pipe": true, 00:35:29.879 "enable_zerocopy_send_client": false, 00:35:29.879 "enable_zerocopy_send_server": true, 00:35:29.879 "impl_name": "ssl", 00:35:29.879 "recv_buf_size": 4096, 00:35:29.879 "send_buf_size": 4096, 00:35:29.879 "tls_version": 0, 00:35:29.879 "zerocopy_threshold": 0 00:35:29.879 } 00:35:29.879 }, 00:35:29.879 { 00:35:29.879 "method": "sock_impl_set_options", 00:35:29.879 "params": { 00:35:29.879 "enable_ktls": false, 00:35:29.879 "enable_placement_id": 0, 00:35:29.879 "enable_quickack": false, 00:35:29.879 "enable_recv_pipe": true, 00:35:29.879 "enable_zerocopy_send_client": false, 00:35:29.879 "enable_zerocopy_send_server": true, 00:35:29.879 "impl_name": "posix", 00:35:29.879 "recv_buf_size": 2097152, 00:35:29.879 "send_buf_size": 2097152, 00:35:29.879 "tls_version": 0, 00:35:29.879 "zerocopy_threshold": 0 00:35:29.879 } 00:35:29.879 } 00:35:29.879 ] 00:35:29.879 }, 00:35:29.879 { 00:35:29.879 "subsystem": "vmd", 00:35:29.879 "config": [] 00:35:29.879 }, 00:35:29.879 { 00:35:29.879 "subsystem": "accel", 00:35:29.879 
"config": [ 00:35:29.879 { 00:35:29.879 "method": "accel_set_options", 00:35:29.879 "params": { 00:35:29.879 "buf_count": 2048, 00:35:29.879 "large_cache_size": 16, 00:35:29.880 "sequence_count": 2048, 00:35:29.880 "small_cache_size": 128, 00:35:29.880 "task_count": 2048 00:35:29.880 } 00:35:29.880 } 00:35:29.880 ] 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "subsystem": "bdev", 00:35:29.880 "config": [ 00:35:29.880 { 00:35:29.880 "method": "bdev_set_options", 00:35:29.880 "params": { 00:35:29.880 "bdev_auto_examine": true, 00:35:29.880 "bdev_io_cache_size": 256, 00:35:29.880 "bdev_io_pool_size": 65535, 00:35:29.880 "iobuf_large_cache_size": 16, 00:35:29.880 "iobuf_small_cache_size": 128 00:35:29.880 } 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "method": "bdev_raid_set_options", 00:35:29.880 "params": { 00:35:29.880 "process_max_bandwidth_mb_sec": 0, 00:35:29.880 "process_window_size_kb": 1024 00:35:29.880 } 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "method": "bdev_iscsi_set_options", 00:35:29.880 "params": { 00:35:29.880 "timeout_sec": 30 00:35:29.880 } 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "method": "bdev_nvme_set_options", 00:35:29.880 "params": { 00:35:29.880 "action_on_timeout": "none", 00:35:29.880 "allow_accel_sequence": false, 00:35:29.880 "arbitration_burst": 0, 00:35:29.880 "bdev_retry_count": 3, 00:35:29.880 "ctrlr_loss_timeout_sec": 0, 00:35:29.880 "delay_cmd_submit": true, 00:35:29.880 "dhchap_dhgroups": [ 00:35:29.880 "null", 00:35:29.880 "ffdhe2048", 00:35:29.880 "ffdhe3072", 00:35:29.880 "ffdhe4096", 00:35:29.880 "ffdhe6144", 00:35:29.880 "ffdhe8192" 00:35:29.880 ], 00:35:29.880 "dhchap_digests": [ 00:35:29.880 "sha256", 00:35:29.880 "sha384", 00:35:29.880 "sha512" 00:35:29.880 ], 00:35:29.880 "disable_auto_failback": false, 00:35:29.880 "fast_io_fail_timeout_sec": 0, 00:35:29.880 "generate_uuids": false, 00:35:29.880 "high_priority_weight": 0, 00:35:29.880 "io_path_stat": false, 00:35:29.880 "io_queue_requests": 512, 00:35:29.880 "keep_alive_timeout_ms": 10000, 00:35:29.880 "low_priority_weight": 0, 00:35:29.880 "medium_priority_weight": 0, 00:35:29.880 "nvme_adminq_poll_period_us": 10000, 00:35:29.880 "nvme_error_stat": false, 00:35:29.880 "nvme_ioq_poll_period_us": 0, 00:35:29.880 "rdma_cm_event_timeout_ms": 0, 00:35:29.880 "rdma_max_cq_size": 0, 00:35:29.880 "rdma_srq_size": 0, 00:35:29.880 "reconnect_delay_sec": 0, 00:35:29.880 "timeout_admin_us": 0, 00:35:29.880 "timeout_us": 0, 00:35:29.880 "transport_ack_timeout": 0, 00:35:29.880 "transport_retry_count": 4, 00:35:29.880 "transport_tos": 0 00:35:29.880 } 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "method": "bdev_nvme_attach_controller", 00:35:29.880 "params": { 00:35:29.880 "adrfam": "IPv4", 00:35:29.880 "ctrlr_loss_timeout_sec": 0, 00:35:29.880 "ddgst": false, 00:35:29.880 "fast_io_fail_timeout_sec": 0, 00:35:29.880 "hdgst": false, 00:35:29.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:29.880 "multipath": "multipath", 00:35:29.880 "name": "TLSTEST", 00:35:29.880 "prchk_guard": false, 00:35:29.880 "prchk_reftag": false, 00:35:29.880 "psk": "key0", 00:35:29.880 "reconnect_delay_sec": 0, 00:35:29.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:29.880 "traddr": "10.0.0.3", 00:35:29.880 "trsvcid": "4420", 00:35:29.880 "trtype": "TCP" 00:35:29.880 } 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "method": "bdev_nvme_set_hotplug", 00:35:29.880 "params": { 00:35:29.880 "enable": false, 00:35:29.880 "period_us": 100000 00:35:29.880 } 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "method": "bdev_wait_for_examine" 
00:35:29.880 } 00:35:29.880 ] 00:35:29.880 }, 00:35:29.880 { 00:35:29.880 "subsystem": "nbd", 00:35:29.880 "config": [] 00:35:29.880 } 00:35:29.880 ] 00:35:29.880 }' 00:35:29.880 [2024-11-20 05:46:49.758841] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:29.880 [2024-11-20 05:46:49.759169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84268 ] 00:35:30.139 [2024-11-20 05:46:49.911772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.139 [2024-11-20 05:46:49.971463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:30.398 [2024-11-20 05:46:50.130151] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:30.966 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:30.966 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:30.966 05:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:35:31.224 Running I/O for 10 seconds... 00:35:33.093 5023.00 IOPS, 19.62 MiB/s [2024-11-20T05:46:53.945Z] 4679.50 IOPS, 18.28 MiB/s [2024-11-20T05:46:55.322Z] 4756.67 IOPS, 18.58 MiB/s [2024-11-20T05:46:56.258Z] 4828.50 IOPS, 18.86 MiB/s [2024-11-20T05:46:57.195Z] 4966.80 IOPS, 19.40 MiB/s [2024-11-20T05:46:58.131Z] 4922.50 IOPS, 19.23 MiB/s [2024-11-20T05:46:59.066Z] 4952.86 IOPS, 19.35 MiB/s [2024-11-20T05:47:00.002Z] 4959.12 IOPS, 19.37 MiB/s [2024-11-20T05:47:00.936Z] 4984.33 IOPS, 19.47 MiB/s [2024-11-20T05:47:01.194Z] 5002.50 IOPS, 19.54 MiB/s 00:35:41.275 Latency(us) 00:35:41.275 [2024-11-20T05:47:01.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.275 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:41.275 Verification LBA range: start 0x0 length 0x2000 00:35:41.275 TLSTESTn1 : 10.01 5008.99 19.57 0.00 0.00 25514.87 4431.48 28711.01 00:35:41.275 [2024-11-20T05:47:01.194Z] =================================================================================================================== 00:35:41.275 [2024-11-20T05:47:01.194Z] Total : 5008.99 19.57 0.00 0.00 25514.87 4431.48 28711.01 00:35:41.275 { 00:35:41.275 "results": [ 00:35:41.275 { 00:35:41.275 "job": "TLSTESTn1", 00:35:41.275 "core_mask": "0x4", 00:35:41.275 "workload": "verify", 00:35:41.275 "status": "finished", 00:35:41.275 "verify_range": { 00:35:41.275 "start": 0, 00:35:41.275 "length": 8192 00:35:41.275 }, 00:35:41.275 "queue_depth": 128, 00:35:41.275 "io_size": 4096, 00:35:41.275 "runtime": 10.012398, 00:35:41.275 "iops": 5008.989854378541, 00:35:41.275 "mibps": 19.566366618666176, 00:35:41.275 "io_failed": 0, 00:35:41.275 "io_timeout": 0, 00:35:41.275 "avg_latency_us": 25514.87131265714, 00:35:41.275 "min_latency_us": 4431.481904761905, 00:35:41.275 "max_latency_us": 28711.009523809524 00:35:41.275 } 00:35:41.275 ], 00:35:41.275 "core_count": 1 00:35:41.275 } 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84268 00:35:41.275 05:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84268 ']' 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84268 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84268 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:35:41.275 killing process with pid 84268 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84268' 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84268 00:35:41.275 Received shutdown signal, test time was about 10.000000 seconds 00:35:41.275 00:35:41.275 Latency(us) 00:35:41.275 [2024-11-20T05:47:01.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.275 [2024-11-20T05:47:01.194Z] =================================================================================================================== 00:35:41.275 [2024-11-20T05:47:01.194Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:41.275 05:47:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84268 00:35:41.275 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84218 00:35:41.275 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84218 ']' 00:35:41.275 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84218 00:35:41.275 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:41.275 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:41.275 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84218 00:35:41.534 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:41.534 killing process with pid 84218 00:35:41.534 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:41.534 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84218' 00:35:41.534 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84218 00:35:41.534 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84218 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84419 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- 
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84419 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84419 ']' 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:41.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:41.793 05:47:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:41.793 [2024-11-20 05:47:01.569781] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:41.793 [2024-11-20 05:47:01.569887] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.051 [2024-11-20 05:47:01.713544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.051 [2024-11-20 05:47:01.761337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.051 [2024-11-20 05:47:01.761388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:42.051 [2024-11-20 05:47:01.761398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.051 [2024-11-20 05:47:01.761406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.051 [2024-11-20 05:47:01.761414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
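Both trace hints printed above can be acted on directly while this target (started with -e 0xFFFF, i.e. all tracepoint groups enabled) is still running. A sketch; the two commands are taken from the notices themselves, and the build/bin/spdk_trace path is assumed from the usual SPDK build layout:

    # live snapshot of nvmf tracepoints from app instance 0
    build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/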
00:35:42.051 [2024-11-20 05:47:01.761709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.fX5V20kLgP 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fX5V20kLgP 00:35:42.618 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:35:43.184 [2024-11-20 05:47:02.834979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:43.184 05:47:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:35:43.442 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:35:43.701 [2024-11-20 05:47:03.375040] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:43.701 [2024-11-20 05:47:03.375250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:43.701 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:35:43.959 malloc0 00:35:43.959 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:35:43.959 05:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:44.218 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84533 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84533 /var/tmp/bdevperf.sock 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84533 ']' 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
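Stripped of the xtrace noise, the setup_nvmf_tgt helper traced above performs this RPC sequence. Every argument below is verbatim from the commands in the log; rpc.py stands in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as secure-channel (TLS); the saved config later
    # shows it as "secure_channel": true
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the PSK file and tie it to the one allowed host NQN
    rpc.py keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0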
00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:44.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:44.476 05:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:44.476 [2024-11-20 05:47:04.379285] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:44.476 [2024-11-20 05:47:04.379397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84533 ] 00:35:44.734 [2024-11-20 05:47:04.537114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.734 [2024-11-20 05:47:04.628999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.682 05:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:45.682 05:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:45.682 05:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:45.682 05:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:45.941 [2024-11-20 05:47:05.832063] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:46.199 nvme0n1 00:35:46.199 05:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:46.199 Running I/O for 1 seconds... 
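The initiator half mirrors that setup against the bdevperf RPC socket; the arguments are again verbatim from the traced commands, and attaching with --psk is what triggers the "TLS support is considered experimental" notice above:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # kick off the queued workload in the -z (wait-for-RPC) bdevperf
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests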
00:35:47.151 4909.00 IOPS, 19.18 MiB/s 00:35:47.152 Latency(us) 00:35:47.152 [2024-11-20T05:47:07.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.152 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:47.152 Verification LBA range: start 0x0 length 0x2000 00:35:47.152 nvme0n1 : 1.01 4963.78 19.39 0.00 0.00 25565.28 5773.41 18849.40 00:35:47.152 [2024-11-20T05:47:07.071Z] =================================================================================================================== 00:35:47.152 [2024-11-20T05:47:07.071Z] Total : 4963.78 19.39 0.00 0.00 25565.28 5773.41 18849.40 00:35:47.152 { 00:35:47.152 "results": [ 00:35:47.152 { 00:35:47.152 "job": "nvme0n1", 00:35:47.152 "core_mask": "0x2", 00:35:47.152 "workload": "verify", 00:35:47.152 "status": "finished", 00:35:47.152 "verify_range": { 00:35:47.152 "start": 0, 00:35:47.152 "length": 8192 00:35:47.152 }, 00:35:47.152 "queue_depth": 128, 00:35:47.152 "io_size": 4096, 00:35:47.152 "runtime": 1.014952, 00:35:47.152 "iops": 4963.781538437286, 00:35:47.152 "mibps": 19.38977163452065, 00:35:47.152 "io_failed": 0, 00:35:47.152 "io_timeout": 0, 00:35:47.152 "avg_latency_us": 25565.28400158793, 00:35:47.152 "min_latency_us": 5773.409523809524, 00:35:47.152 "max_latency_us": 18849.401904761904 00:35:47.152 } 00:35:47.152 ], 00:35:47.152 "core_count": 1 00:35:47.152 } 00:35:47.152 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84533 00:35:47.152 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84533 ']' 00:35:47.152 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84533 00:35:47.152 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:47.152 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:47.152 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84533 00:35:47.410 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:47.410 killing process with pid 84533 00:35:47.410 Received shutdown signal, test time was about 1.000000 seconds 00:35:47.410 00:35:47.410 Latency(us) 00:35:47.410 [2024-11-20T05:47:07.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.410 [2024-11-20T05:47:07.329Z] =================================================================================================================== 00:35:47.410 [2024-11-20T05:47:07.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:47.410 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:47.410 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84533' 00:35:47.410 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84533 00:35:47.410 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84533 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84419 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84419 ']' 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84419 00:35:47.668 05:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84419 00:35:47.668 killing process with pid 84419 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84419' 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84419 00:35:47.668 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84419 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84605 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84605 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84605 ']' 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:47.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:47.925 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:47.925 [2024-11-20 05:47:07.670061] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:47.925 [2024-11-20 05:47:07.670162] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.925 [2024-11-20 05:47:07.821285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.182 [2024-11-20 05:47:07.868128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.182 [2024-11-20 05:47:07.868176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
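The killprocess/wait pairs that recur through this run (pids 84133, 84022, 84268, 84533, 84419 so far) follow one idiom from autotest_common.sh: confirm the pid is alive and is not something critical, TERM it, then reap it. A condensed, illustrative version only; as the traces show, the real helper also inspects the process name via ps and the caller issues the wait separately:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        echo "killing process with pid $pid"
        kill "$pid"                              # SIGTERM
        wait "$pid" || true                      # reap; a non-zero status is expected here
    }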
00:35:48.182 [2024-11-20 05:47:07.868203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.182 [2024-11-20 05:47:07.868212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.182 [2024-11-20 05:47:07.868221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.182 [2024-11-20 05:47:07.868561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.182 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:48.182 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:48.182 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:48.182 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:48.182 05:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:48.182 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.182 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:35:48.182 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.182 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:48.182 [2024-11-20 05:47:08.028734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.182 malloc0 00:35:48.183 [2024-11-20 05:47:08.057333] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:48.183 [2024-11-20 05:47:08.057524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84642 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84642 /var/tmp/bdevperf.sock 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84642 ']' 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:48.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:48.183 05:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:48.440 [2024-11-20 05:47:08.147113] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
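For reference, the bdevperf command line the harness keeps reusing decodes as follows; the values are verbatim from the trace, while the flag meanings are standard bdevperf usage (a sketch, not output from this run):

    # -m 2: core mask 0x2 (hence "Reactor started on core 1" in the log)
    # -z:   start idle and wait for the perform_tests RPC
    # -r:   RPC socket that rpc.py/bdevperf.py talk to
    # -q 128 -o 4k: queue depth 128, 4 KiB I/Os
    # -w verify -t 1: write-read-verify workload for 1 second
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1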
00:35:48.440 [2024-11-20 05:47:08.147220] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84642 ] 00:35:48.440 [2024-11-20 05:47:08.303382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.697 [2024-11-20 05:47:08.364648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.263 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:49.263 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:49.263 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fX5V20kLgP 00:35:49.521 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:35:50.085 [2024-11-20 05:47:09.729016] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:50.085 nvme0n1 00:35:50.085 05:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:50.085 Running I/O for 1 seconds... 00:35:51.461 5109.00 IOPS, 19.96 MiB/s 00:35:51.461 Latency(us) 00:35:51.461 [2024-11-20T05:47:11.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.461 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:51.461 Verification LBA range: start 0x0 length 0x2000 00:35:51.461 nvme0n1 : 1.01 5164.15 20.17 0.00 0.00 24595.62 4681.14 19598.38 00:35:51.461 [2024-11-20T05:47:11.380Z] =================================================================================================================== 00:35:51.461 [2024-11-20T05:47:11.380Z] Total : 5164.15 20.17 0.00 0.00 24595.62 4681.14 19598.38 00:35:51.461 { 00:35:51.461 "results": [ 00:35:51.461 { 00:35:51.461 "job": "nvme0n1", 00:35:51.461 "core_mask": "0x2", 00:35:51.461 "workload": "verify", 00:35:51.461 "status": "finished", 00:35:51.461 "verify_range": { 00:35:51.461 "start": 0, 00:35:51.461 "length": 8192 00:35:51.461 }, 00:35:51.461 "queue_depth": 128, 00:35:51.461 "io_size": 4096, 00:35:51.461 "runtime": 1.014106, 00:35:51.461 "iops": 5164.1544375045605, 00:35:51.461 "mibps": 20.17247827150219, 00:35:51.461 "io_failed": 0, 00:35:51.461 "io_timeout": 0, 00:35:51.461 "avg_latency_us": 24595.6231983051, 00:35:51.461 "min_latency_us": 4681.142857142857, 00:35:51.461 "max_latency_us": 19598.384761904763 00:35:51.461 } 00:35:51.461 ], 00:35:51.461 "core_count": 1 00:35:51.461 } 00:35:51.461 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:35:51.461 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.461 05:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:51.461 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.461 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:35:51.461 "subsystems": [ 00:35:51.461 { 00:35:51.461 "subsystem": "keyring", 00:35:51.461 "config": [ 00:35:51.461 { 00:35:51.461 "method": "keyring_file_add_key", 00:35:51.461 "params": { 00:35:51.461 "name": "key0", 00:35:51.461 "path": "/tmp/tmp.fX5V20kLgP" 00:35:51.461 } 00:35:51.461 } 00:35:51.461 ] 00:35:51.461 }, 00:35:51.461 { 00:35:51.461 "subsystem": "iobuf", 00:35:51.461 "config": [ 00:35:51.461 { 00:35:51.461 "method": "iobuf_set_options", 00:35:51.461 "params": { 00:35:51.461 "enable_numa": false, 00:35:51.461 "large_bufsize": 135168, 00:35:51.461 "large_pool_count": 1024, 00:35:51.461 "small_bufsize": 8192, 00:35:51.461 "small_pool_count": 8192 00:35:51.461 } 00:35:51.461 } 00:35:51.461 ] 00:35:51.461 }, 00:35:51.461 { 00:35:51.461 "subsystem": "sock", 00:35:51.461 "config": [ 00:35:51.461 { 00:35:51.461 "method": "sock_set_default_impl", 00:35:51.461 "params": { 00:35:51.461 "impl_name": "posix" 00:35:51.461 } 00:35:51.461 }, 00:35:51.461 { 00:35:51.461 "method": "sock_impl_set_options", 00:35:51.461 "params": { 00:35:51.461 "enable_ktls": false, 00:35:51.461 "enable_placement_id": 0, 00:35:51.461 "enable_quickack": false, 00:35:51.461 "enable_recv_pipe": true, 00:35:51.462 "enable_zerocopy_send_client": false, 00:35:51.462 "enable_zerocopy_send_server": true, 00:35:51.462 "impl_name": "ssl", 00:35:51.462 "recv_buf_size": 4096, 00:35:51.462 "send_buf_size": 4096, 00:35:51.462 "tls_version": 0, 00:35:51.462 "zerocopy_threshold": 0 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "sock_impl_set_options", 00:35:51.462 "params": { 00:35:51.462 "enable_ktls": false, 00:35:51.462 "enable_placement_id": 0, 00:35:51.462 "enable_quickack": false, 00:35:51.462 "enable_recv_pipe": true, 00:35:51.462 "enable_zerocopy_send_client": false, 00:35:51.462 "enable_zerocopy_send_server": true, 00:35:51.462 "impl_name": "posix", 00:35:51.462 "recv_buf_size": 2097152, 00:35:51.462 "send_buf_size": 2097152, 00:35:51.462 "tls_version": 0, 00:35:51.462 "zerocopy_threshold": 0 00:35:51.462 } 00:35:51.462 } 00:35:51.462 ] 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "subsystem": "vmd", 00:35:51.462 "config": [] 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "subsystem": "accel", 00:35:51.462 "config": [ 00:35:51.462 { 00:35:51.462 "method": "accel_set_options", 00:35:51.462 "params": { 00:35:51.462 "buf_count": 2048, 00:35:51.462 "large_cache_size": 16, 00:35:51.462 "sequence_count": 2048, 00:35:51.462 "small_cache_size": 128, 00:35:51.462 "task_count": 2048 00:35:51.462 } 00:35:51.462 } 00:35:51.462 ] 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "subsystem": "bdev", 00:35:51.462 "config": [ 00:35:51.462 { 00:35:51.462 "method": "bdev_set_options", 00:35:51.462 "params": { 00:35:51.462 "bdev_auto_examine": true, 00:35:51.462 "bdev_io_cache_size": 256, 00:35:51.462 "bdev_io_pool_size": 65535, 00:35:51.462 "iobuf_large_cache_size": 16, 00:35:51.462 "iobuf_small_cache_size": 128 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "bdev_raid_set_options", 00:35:51.462 "params": { 00:35:51.462 "process_max_bandwidth_mb_sec": 0, 00:35:51.462 "process_window_size_kb": 1024 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "bdev_iscsi_set_options", 00:35:51.462 "params": { 00:35:51.462 "timeout_sec": 30 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "bdev_nvme_set_options", 00:35:51.462 "params": { 00:35:51.462 "action_on_timeout": "none", 00:35:51.462 "allow_accel_sequence": false, 00:35:51.462 "arbitration_burst": 0, 00:35:51.462 
"bdev_retry_count": 3, 00:35:51.462 "ctrlr_loss_timeout_sec": 0, 00:35:51.462 "delay_cmd_submit": true, 00:35:51.462 "dhchap_dhgroups": [ 00:35:51.462 "null", 00:35:51.462 "ffdhe2048", 00:35:51.462 "ffdhe3072", 00:35:51.462 "ffdhe4096", 00:35:51.462 "ffdhe6144", 00:35:51.462 "ffdhe8192" 00:35:51.462 ], 00:35:51.462 "dhchap_digests": [ 00:35:51.462 "sha256", 00:35:51.462 "sha384", 00:35:51.462 "sha512" 00:35:51.462 ], 00:35:51.462 "disable_auto_failback": false, 00:35:51.462 "fast_io_fail_timeout_sec": 0, 00:35:51.462 "generate_uuids": false, 00:35:51.462 "high_priority_weight": 0, 00:35:51.462 "io_path_stat": false, 00:35:51.462 "io_queue_requests": 0, 00:35:51.462 "keep_alive_timeout_ms": 10000, 00:35:51.462 "low_priority_weight": 0, 00:35:51.462 "medium_priority_weight": 0, 00:35:51.462 "nvme_adminq_poll_period_us": 10000, 00:35:51.462 "nvme_error_stat": false, 00:35:51.462 "nvme_ioq_poll_period_us": 0, 00:35:51.462 "rdma_cm_event_timeout_ms": 0, 00:35:51.462 "rdma_max_cq_size": 0, 00:35:51.462 "rdma_srq_size": 0, 00:35:51.462 "reconnect_delay_sec": 0, 00:35:51.462 "timeout_admin_us": 0, 00:35:51.462 "timeout_us": 0, 00:35:51.462 "transport_ack_timeout": 0, 00:35:51.462 "transport_retry_count": 4, 00:35:51.462 "transport_tos": 0 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "bdev_nvme_set_hotplug", 00:35:51.462 "params": { 00:35:51.462 "enable": false, 00:35:51.462 "period_us": 100000 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "bdev_malloc_create", 00:35:51.462 "params": { 00:35:51.462 "block_size": 4096, 00:35:51.462 "dif_is_head_of_md": false, 00:35:51.462 "dif_pi_format": 0, 00:35:51.462 "dif_type": 0, 00:35:51.462 "md_size": 0, 00:35:51.462 "name": "malloc0", 00:35:51.462 "num_blocks": 8192, 00:35:51.462 "optimal_io_boundary": 0, 00:35:51.462 "physical_block_size": 4096, 00:35:51.462 "uuid": "886d3d44-cf02-454a-bdba-b58151372e80" 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "bdev_wait_for_examine" 00:35:51.462 } 00:35:51.462 ] 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "subsystem": "nbd", 00:35:51.462 "config": [] 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "subsystem": "scheduler", 00:35:51.462 "config": [ 00:35:51.462 { 00:35:51.462 "method": "framework_set_scheduler", 00:35:51.462 "params": { 00:35:51.462 "name": "static" 00:35:51.462 } 00:35:51.462 } 00:35:51.462 ] 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "subsystem": "nvmf", 00:35:51.462 "config": [ 00:35:51.462 { 00:35:51.462 "method": "nvmf_set_config", 00:35:51.462 "params": { 00:35:51.462 "admin_cmd_passthru": { 00:35:51.462 "identify_ctrlr": false 00:35:51.462 }, 00:35:51.462 "dhchap_dhgroups": [ 00:35:51.462 "null", 00:35:51.462 "ffdhe2048", 00:35:51.462 "ffdhe3072", 00:35:51.462 "ffdhe4096", 00:35:51.462 "ffdhe6144", 00:35:51.462 "ffdhe8192" 00:35:51.462 ], 00:35:51.462 "dhchap_digests": [ 00:35:51.462 "sha256", 00:35:51.462 "sha384", 00:35:51.462 "sha512" 00:35:51.462 ], 00:35:51.462 "discovery_filter": "match_any" 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "nvmf_set_max_subsystems", 00:35:51.462 "params": { 00:35:51.462 "max_subsystems": 1024 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "nvmf_set_crdt", 00:35:51.462 "params": { 00:35:51.462 "crdt1": 0, 00:35:51.462 "crdt2": 0, 00:35:51.462 "crdt3": 0 00:35:51.462 } 00:35:51.462 }, 00:35:51.462 { 00:35:51.462 "method": "nvmf_create_transport", 00:35:51.462 "params": { 00:35:51.462 "abort_timeout_sec": 1, 00:35:51.462 "ack_timeout": 0, 
00:35:51.462 "buf_cache_size": 4294967295, 00:35:51.462 "c2h_success": false, 00:35:51.462 "data_wr_pool_size": 0, 00:35:51.462 "dif_insert_or_strip": false, 00:35:51.462 "in_capsule_data_size": 4096, 00:35:51.462 "io_unit_size": 131072, 00:35:51.462 "max_aq_depth": 128, 00:35:51.462 "max_io_qpairs_per_ctrlr": 127, 00:35:51.462 "max_io_size": 131072, 00:35:51.462 "max_queue_depth": 128, 00:35:51.463 "num_shared_buffers": 511, 00:35:51.463 "sock_priority": 0, 00:35:51.463 "trtype": "TCP", 00:35:51.463 "zcopy": false 00:35:51.463 } 00:35:51.463 }, 00:35:51.463 { 00:35:51.463 "method": "nvmf_create_subsystem", 00:35:51.463 "params": { 00:35:51.463 "allow_any_host": false, 00:35:51.463 "ana_reporting": false, 00:35:51.463 "max_cntlid": 65519, 00:35:51.463 "max_namespaces": 32, 00:35:51.463 "min_cntlid": 1, 00:35:51.463 "model_number": "SPDK bdev Controller", 00:35:51.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.463 "serial_number": "00000000000000000000" 00:35:51.463 } 00:35:51.463 }, 00:35:51.463 { 00:35:51.463 "method": "nvmf_subsystem_add_host", 00:35:51.463 "params": { 00:35:51.463 "host": "nqn.2016-06.io.spdk:host1", 00:35:51.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.463 "psk": "key0" 00:35:51.463 } 00:35:51.463 }, 00:35:51.463 { 00:35:51.463 "method": "nvmf_subsystem_add_ns", 00:35:51.463 "params": { 00:35:51.463 "namespace": { 00:35:51.463 "bdev_name": "malloc0", 00:35:51.463 "nguid": "886D3D44CF02454ABDBAB58151372E80", 00:35:51.463 "no_auto_visible": false, 00:35:51.463 "nsid": 1, 00:35:51.463 "uuid": "886d3d44-cf02-454a-bdba-b58151372e80" 00:35:51.463 }, 00:35:51.463 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:35:51.463 } 00:35:51.463 }, 00:35:51.463 { 00:35:51.463 "method": "nvmf_subsystem_add_listener", 00:35:51.463 "params": { 00:35:51.463 "listen_address": { 00:35:51.463 "adrfam": "IPv4", 00:35:51.463 "traddr": "10.0.0.3", 00:35:51.463 "trsvcid": "4420", 00:35:51.463 "trtype": "TCP" 00:35:51.463 }, 00:35:51.463 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.463 "secure_channel": false, 00:35:51.463 "sock_impl": "ssl" 00:35:51.463 } 00:35:51.463 } 00:35:51.463 ] 00:35:51.463 } 00:35:51.463 ] 00:35:51.463 }' 00:35:51.463 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:35:51.722 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:35:51.722 "subsystems": [ 00:35:51.722 { 00:35:51.722 "subsystem": "keyring", 00:35:51.722 "config": [ 00:35:51.722 { 00:35:51.722 "method": "keyring_file_add_key", 00:35:51.722 "params": { 00:35:51.722 "name": "key0", 00:35:51.722 "path": "/tmp/tmp.fX5V20kLgP" 00:35:51.722 } 00:35:51.722 } 00:35:51.722 ] 00:35:51.722 }, 00:35:51.722 { 00:35:51.722 "subsystem": "iobuf", 00:35:51.722 "config": [ 00:35:51.722 { 00:35:51.722 "method": "iobuf_set_options", 00:35:51.722 "params": { 00:35:51.722 "enable_numa": false, 00:35:51.722 "large_bufsize": 135168, 00:35:51.722 "large_pool_count": 1024, 00:35:51.722 "small_bufsize": 8192, 00:35:51.722 "small_pool_count": 8192 00:35:51.722 } 00:35:51.722 } 00:35:51.722 ] 00:35:51.722 }, 00:35:51.722 { 00:35:51.722 "subsystem": "sock", 00:35:51.722 "config": [ 00:35:51.722 { 00:35:51.722 "method": "sock_set_default_impl", 00:35:51.722 "params": { 00:35:51.722 "impl_name": "posix" 00:35:51.722 } 00:35:51.722 }, 00:35:51.722 { 00:35:51.722 "method": "sock_impl_set_options", 00:35:51.722 "params": { 00:35:51.722 "enable_ktls": false, 00:35:51.722 "enable_placement_id": 0, 
00:35:51.723 "enable_quickack": false, 00:35:51.723 "enable_recv_pipe": true, 00:35:51.723 "enable_zerocopy_send_client": false, 00:35:51.723 "enable_zerocopy_send_server": true, 00:35:51.723 "impl_name": "ssl", 00:35:51.723 "recv_buf_size": 4096, 00:35:51.723 "send_buf_size": 4096, 00:35:51.723 "tls_version": 0, 00:35:51.723 "zerocopy_threshold": 0 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "sock_impl_set_options", 00:35:51.723 "params": { 00:35:51.723 "enable_ktls": false, 00:35:51.723 "enable_placement_id": 0, 00:35:51.723 "enable_quickack": false, 00:35:51.723 "enable_recv_pipe": true, 00:35:51.723 "enable_zerocopy_send_client": false, 00:35:51.723 "enable_zerocopy_send_server": true, 00:35:51.723 "impl_name": "posix", 00:35:51.723 "recv_buf_size": 2097152, 00:35:51.723 "send_buf_size": 2097152, 00:35:51.723 "tls_version": 0, 00:35:51.723 "zerocopy_threshold": 0 00:35:51.723 } 00:35:51.723 } 00:35:51.723 ] 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "subsystem": "vmd", 00:35:51.723 "config": [] 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "subsystem": "accel", 00:35:51.723 "config": [ 00:35:51.723 { 00:35:51.723 "method": "accel_set_options", 00:35:51.723 "params": { 00:35:51.723 "buf_count": 2048, 00:35:51.723 "large_cache_size": 16, 00:35:51.723 "sequence_count": 2048, 00:35:51.723 "small_cache_size": 128, 00:35:51.723 "task_count": 2048 00:35:51.723 } 00:35:51.723 } 00:35:51.723 ] 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "subsystem": "bdev", 00:35:51.723 "config": [ 00:35:51.723 { 00:35:51.723 "method": "bdev_set_options", 00:35:51.723 "params": { 00:35:51.723 "bdev_auto_examine": true, 00:35:51.723 "bdev_io_cache_size": 256, 00:35:51.723 "bdev_io_pool_size": 65535, 00:35:51.723 "iobuf_large_cache_size": 16, 00:35:51.723 "iobuf_small_cache_size": 128 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "bdev_raid_set_options", 00:35:51.723 "params": { 00:35:51.723 "process_max_bandwidth_mb_sec": 0, 00:35:51.723 "process_window_size_kb": 1024 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "bdev_iscsi_set_options", 00:35:51.723 "params": { 00:35:51.723 "timeout_sec": 30 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "bdev_nvme_set_options", 00:35:51.723 "params": { 00:35:51.723 "action_on_timeout": "none", 00:35:51.723 "allow_accel_sequence": false, 00:35:51.723 "arbitration_burst": 0, 00:35:51.723 "bdev_retry_count": 3, 00:35:51.723 "ctrlr_loss_timeout_sec": 0, 00:35:51.723 "delay_cmd_submit": true, 00:35:51.723 "dhchap_dhgroups": [ 00:35:51.723 "null", 00:35:51.723 "ffdhe2048", 00:35:51.723 "ffdhe3072", 00:35:51.723 "ffdhe4096", 00:35:51.723 "ffdhe6144", 00:35:51.723 "ffdhe8192" 00:35:51.723 ], 00:35:51.723 "dhchap_digests": [ 00:35:51.723 "sha256", 00:35:51.723 "sha384", 00:35:51.723 "sha512" 00:35:51.723 ], 00:35:51.723 "disable_auto_failback": false, 00:35:51.723 "fast_io_fail_timeout_sec": 0, 00:35:51.723 "generate_uuids": false, 00:35:51.723 "high_priority_weight": 0, 00:35:51.723 "io_path_stat": false, 00:35:51.723 "io_queue_requests": 512, 00:35:51.723 "keep_alive_timeout_ms": 10000, 00:35:51.723 "low_priority_weight": 0, 00:35:51.723 "medium_priority_weight": 0, 00:35:51.723 "nvme_adminq_poll_period_us": 10000, 00:35:51.723 "nvme_error_stat": false, 00:35:51.723 "nvme_ioq_poll_period_us": 0, 00:35:51.723 "rdma_cm_event_timeout_ms": 0, 00:35:51.723 "rdma_max_cq_size": 0, 00:35:51.723 "rdma_srq_size": 0, 00:35:51.723 "reconnect_delay_sec": 0, 00:35:51.723 "timeout_admin_us": 0, 00:35:51.723 
"timeout_us": 0, 00:35:51.723 "transport_ack_timeout": 0, 00:35:51.723 "transport_retry_count": 4, 00:35:51.723 "transport_tos": 0 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "bdev_nvme_attach_controller", 00:35:51.723 "params": { 00:35:51.723 "adrfam": "IPv4", 00:35:51.723 "ctrlr_loss_timeout_sec": 0, 00:35:51.723 "ddgst": false, 00:35:51.723 "fast_io_fail_timeout_sec": 0, 00:35:51.723 "hdgst": false, 00:35:51.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:51.723 "multipath": "multipath", 00:35:51.723 "name": "nvme0", 00:35:51.723 "prchk_guard": false, 00:35:51.723 "prchk_reftag": false, 00:35:51.723 "psk": "key0", 00:35:51.723 "reconnect_delay_sec": 0, 00:35:51.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.723 "traddr": "10.0.0.3", 00:35:51.723 "trsvcid": "4420", 00:35:51.723 "trtype": "TCP" 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "bdev_nvme_set_hotplug", 00:35:51.723 "params": { 00:35:51.723 "enable": false, 00:35:51.723 "period_us": 100000 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "bdev_enable_histogram", 00:35:51.723 "params": { 00:35:51.723 "enable": true, 00:35:51.723 "name": "nvme0n1" 00:35:51.723 } 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "method": "bdev_wait_for_examine" 00:35:51.723 } 00:35:51.723 ] 00:35:51.723 }, 00:35:51.723 { 00:35:51.723 "subsystem": "nbd", 00:35:51.723 "config": [] 00:35:51.723 } 00:35:51.723 ] 00:35:51.723 }' 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84642 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84642 ']' 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84642 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84642 00:35:51.723 killing process with pid 84642 00:35:51.723 Received shutdown signal, test time was about 1.000000 seconds 00:35:51.723 00:35:51.723 Latency(us) 00:35:51.723 [2024-11-20T05:47:11.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.723 [2024-11-20T05:47:11.642Z] =================================================================================================================== 00:35:51.723 [2024-11-20T05:47:11.642Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84642' 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84642 00:35:51.723 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84642 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84605 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84605 ']' 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84605 
00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84605 00:35:51.982 killing process with pid 84605 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84605' 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84605 00:35:51.982 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84605 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:35:52.242 "subsystems": [ 00:35:52.242 { 00:35:52.242 "subsystem": "keyring", 00:35:52.242 "config": [ 00:35:52.242 { 00:35:52.242 "method": "keyring_file_add_key", 00:35:52.242 "params": { 00:35:52.242 "name": "key0", 00:35:52.242 "path": "/tmp/tmp.fX5V20kLgP" 00:35:52.242 } 00:35:52.242 } 00:35:52.242 ] 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "iobuf", 00:35:52.242 "config": [ 00:35:52.242 { 00:35:52.242 "method": "iobuf_set_options", 00:35:52.242 "params": { 00:35:52.242 "enable_numa": false, 00:35:52.242 "large_bufsize": 135168, 00:35:52.242 "large_pool_count": 1024, 00:35:52.242 "small_bufsize": 8192, 00:35:52.242 "small_pool_count": 8192 00:35:52.242 } 00:35:52.242 } 00:35:52.242 ] 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "sock", 00:35:52.242 "config": [ 00:35:52.242 { 00:35:52.242 "method": "sock_set_default_impl", 00:35:52.242 "params": { 00:35:52.242 "impl_name": "posix" 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "sock_impl_set_options", 00:35:52.242 "params": { 00:35:52.242 "enable_ktls": false, 00:35:52.242 "enable_placement_id": 0, 00:35:52.242 "enable_quickack": false, 00:35:52.242 "enable_recv_pipe": true, 00:35:52.242 "enable_zerocopy_send_client": false, 00:35:52.242 "enable_zerocopy_send_server": true, 00:35:52.242 "impl_name": "ssl", 00:35:52.242 "recv_buf_size": 4096, 00:35:52.242 "send_buf_size": 4096, 00:35:52.242 "tls_version": 0, 00:35:52.242 "zerocopy_threshold": 0 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "sock_impl_set_options", 00:35:52.242 "params": { 00:35:52.242 "enable_ktls": false, 00:35:52.242 "enable_placement_id": 0, 00:35:52.242 "enable_quickack": false, 00:35:52.242 "enable_recv_pipe": true, 00:35:52.242 "enable_zerocopy_send_client": false, 00:35:52.242 "enable_zerocopy_send_server": true, 00:35:52.242 "impl_name": "posix", 00:35:52.242 "recv_buf_size": 2097152, 00:35:52.242 "send_buf_size": 2097152, 00:35:52.242 "tls_version": 0, 00:35:52.242 "zerocopy_threshold": 0 00:35:52.242 } 
00:35:52.242 } 00:35:52.242 ] 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "vmd", 00:35:52.242 "config": [] 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "accel", 00:35:52.242 "config": [ 00:35:52.242 { 00:35:52.242 "method": "accel_set_options", 00:35:52.242 "params": { 00:35:52.242 "buf_count": 2048, 00:35:52.242 "large_cache_size": 16, 00:35:52.242 "sequence_count": 2048, 00:35:52.242 "small_cache_size": 128, 00:35:52.242 "task_count": 2048 00:35:52.242 } 00:35:52.242 } 00:35:52.242 ] 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "bdev", 00:35:52.242 "config": [ 00:35:52.242 { 00:35:52.242 "method": "bdev_set_options", 00:35:52.242 "params": { 00:35:52.242 "bdev_auto_examine": true, 00:35:52.242 "bdev_io_cache_size": 256, 00:35:52.242 "bdev_io_pool_size": 65535, 00:35:52.242 "iobuf_large_cache_size": 16, 00:35:52.242 "iobuf_small_cache_size": 128 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "bdev_raid_set_options", 00:35:52.242 "params": { 00:35:52.242 "process_max_bandwidth_mb_sec": 0, 00:35:52.242 "process_window_size_kb": 1024 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "bdev_iscsi_set_options", 00:35:52.242 "params": { 00:35:52.242 "timeout_sec": 30 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "bdev_nvme_set_options", 00:35:52.242 "params": { 00:35:52.242 "action_on_timeout": "none", 00:35:52.242 "allow_accel_sequence": false, 00:35:52.242 "arbitration_burst": 0, 00:35:52.242 "bdev_retry_count": 3, 00:35:52.242 "ctrlr_loss_timeout_sec": 0, 00:35:52.242 "delay_cmd_submit": true, 00:35:52.242 "dhchap_dhgroups": [ 00:35:52.242 "null", 00:35:52.242 "ffdhe2048", 00:35:52.242 "ffdhe3072", 00:35:52.242 "ffdhe4096", 00:35:52.242 "ffdhe6144", 00:35:52.242 "ffdhe8192" 00:35:52.242 ], 00:35:52.242 "dhchap_digests": [ 00:35:52.242 "sha256", 00:35:52.242 "sha384", 00:35:52.242 "sha512" 00:35:52.242 ], 00:35:52.242 "disable_auto_failback": false, 00:35:52.242 "fast_io_fail_timeout_sec": 0, 00:35:52.242 "generate_uuids": false, 00:35:52.242 "high_priority_weight": 0, 00:35:52.242 "io_path_stat": false, 00:35:52.242 "io_queue_requests": 0, 00:35:52.242 "keep_alive_timeout_ms": 10000, 00:35:52.242 "low_priority_weight": 0, 00:35:52.242 "medium_priority_weight": 0, 00:35:52.242 "nvme_adminq_poll_period_us": 10000, 00:35:52.242 "nvme_error_stat": false, 00:35:52.242 "nvme_ioq_poll_period_us": 0, 00:35:52.242 "rdma_cm_event_timeout_ms": 0, 00:35:52.242 "rdma_max_cq_size": 0, 00:35:52.242 "rdma_srq_size": 0, 00:35:52.242 "reconnect_delay_sec": 0, 00:35:52.242 "timeout_admin_us": 0, 00:35:52.242 "timeout_us": 0, 00:35:52.242 "transport_ack_timeout": 0, 00:35:52.242 "transport_retry_count": 4, 00:35:52.242 "transport_tos": 0 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "bdev_nvme_set_hotplug", 00:35:52.242 "params": { 00:35:52.242 "enable": false, 00:35:52.242 "period_us": 100000 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "bdev_malloc_create", 00:35:52.242 "params": { 00:35:52.242 "block_size": 4096, 00:35:52.242 "dif_is_head_of_md": false, 00:35:52.242 "dif_pi_format": 0, 00:35:52.242 "dif_type": 0, 00:35:52.242 "md_size": 0, 00:35:52.242 "name": "malloc0", 00:35:52.242 "num_blocks": 8192, 00:35:52.242 "optimal_io_boundary": 0, 00:35:52.242 "physical_block_size": 4096, 00:35:52.242 "uuid": "886d3d44-cf02-454a-bdba-b58151372e80" 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "bdev_wait_for_examine" 00:35:52.242 } 00:35:52.242 ] 
00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "nbd", 00:35:52.242 "config": [] 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "scheduler", 00:35:52.242 "config": [ 00:35:52.242 { 00:35:52.242 "method": "framework_set_scheduler", 00:35:52.242 "params": { 00:35:52.242 "name": "static" 00:35:52.242 } 00:35:52.242 } 00:35:52.242 ] 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "subsystem": "nvmf", 00:35:52.242 "config": [ 00:35:52.242 { 00:35:52.242 "method": "nvmf_set_config", 00:35:52.242 "params": { 00:35:52.242 "admin_cmd_passthru": { 00:35:52.242 "identify_ctrlr": false 00:35:52.242 }, 00:35:52.242 "dhchap_dhgroups": [ 00:35:52.242 "null", 00:35:52.242 "ffdhe2048", 00:35:52.242 "ffdhe3072", 00:35:52.242 "ffdhe4096", 00:35:52.242 "ffdhe6144", 00:35:52.242 "ffdhe8192" 00:35:52.242 ], 00:35:52.242 "dhchap_digests": [ 00:35:52.242 "sha256", 00:35:52.242 "sha384", 00:35:52.242 "sha512" 00:35:52.242 ], 00:35:52.242 "discovery_filter": "match_any" 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "nvmf_set_max_subsystems", 00:35:52.242 "params": { 00:35:52.242 "max_subsystems": 1024 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "nvmf_set_crdt", 00:35:52.242 "params": { 00:35:52.242 "crdt1": 0, 00:35:52.242 "crdt2": 0, 00:35:52.242 "crdt3": 0 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "nvmf_create_transport", 00:35:52.242 "params": { 00:35:52.242 "abort_timeout_sec": 1, 00:35:52.242 "ack_timeout": 0, 00:35:52.242 "buf_cache_size": 4294967295, 00:35:52.242 "c2h_success": false, 00:35:52.242 "data_wr_pool_size": 0, 00:35:52.242 "dif_insert_or_strip": false, 00:35:52.242 "in_capsule_data_size": 4096, 00:35:52.242 "io_unit_size": 131072, 00:35:52.242 "max_aq_depth": 128, 00:35:52.242 "max_io_qpairs_per_ctrlr": 127, 00:35:52.242 "max_io_size": 131072, 00:35:52.242 "max_queue_depth": 128, 00:35:52.242 "num_shared_buffers": 511, 00:35:52.242 "sock_priority": 0, 00:35:52.242 "trtype": "TCP", 00:35:52.242 "zcopy": false 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "nvmf_create_subsystem", 00:35:52.242 "params": { 00:35:52.242 "allow_any_host": false, 00:35:52.242 "ana_reporting": false, 00:35:52.242 "max_cntlid": 65519, 00:35:52.242 "max_namespaces": 32, 00:35:52.242 "min_cntlid": 1, 00:35:52.242 "model_number": "SPDK bdev Controller", 00:35:52.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:52.242 "serial_number": "00000000000000000000" 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "nvmf_subsystem_add_host", 00:35:52.242 "params": { 00:35:52.242 "host": "nqn.2016-06.io.spdk:host1", 00:35:52.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:52.242 "psk": "key0" 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "nvmf_subsystem_add_ns", 00:35:52.242 "params": { 00:35:52.242 "namespace": { 00:35:52.242 "bdev_name": "malloc0", 00:35:52.242 "nguid": "886D3D44CF02454ABDBAB58151372E80", 00:35:52.242 "no_auto_visible": false, 00:35:52.242 "nsid": 1, 00:35:52.242 "uuid": "886d3d44-cf02-454a-bdba-b58151372e80" 00:35:52.242 }, 00:35:52.242 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:35:52.242 } 00:35:52.242 }, 00:35:52.242 { 00:35:52.242 "method": "nvmf_subsystem_add_listener", 00:35:52.242 "params": { 00:35:52.242 "listen_address": { 00:35:52.242 "adrfam": "IPv4", 00:35:52.242 "traddr": "10.0.0.3", 00:35:52.242 "trsvcid": "4420", 00:35:52.242 "trtype": "TCP" 00:35:52.242 }, 00:35:52.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:52.242 "secure_channel": false, 00:35:52.242 
"sock_impl": "ssl" 00:35:52.242 } 00:35:52.242 } 00:35:52.242 ] 00:35:52.242 } 00:35:52.242 ] 00:35:52.242 }' 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84733 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84733 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84733 ']' 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.242 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:52.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.243 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.243 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:52.243 05:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:52.243 [2024-11-20 05:47:11.982137] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:52.243 [2024-11-20 05:47:11.982218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.243 [2024-11-20 05:47:12.126137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.500 [2024-11-20 05:47:12.179490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.500 [2024-11-20 05:47:12.179557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.500 [2024-11-20 05:47:12.179568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.500 [2024-11-20 05:47:12.179593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.500 [2024-11-20 05:47:12.179601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:52.500 [2024-11-20 05:47:12.179946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.500 [2024-11-20 05:47:12.403892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:52.759 [2024-11-20 05:47:12.435863] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:52.759 [2024-11-20 05:47:12.436069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84777 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84777 /var/tmp/bdevperf.sock 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84777 ']' 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:35:53.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:53.326 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:35:53.326 "subsystems": [ 00:35:53.326 { 00:35:53.326 "subsystem": "keyring", 00:35:53.326 "config": [ 00:35:53.326 { 00:35:53.326 "method": "keyring_file_add_key", 00:35:53.326 "params": { 00:35:53.326 "name": "key0", 00:35:53.326 "path": "/tmp/tmp.fX5V20kLgP" 00:35:53.326 } 00:35:53.326 } 00:35:53.326 ] 00:35:53.326 }, 00:35:53.326 { 00:35:53.326 "subsystem": "iobuf", 00:35:53.326 "config": [ 00:35:53.326 { 00:35:53.326 "method": "iobuf_set_options", 00:35:53.326 "params": { 00:35:53.326 "enable_numa": false, 00:35:53.326 "large_bufsize": 135168, 00:35:53.326 "large_pool_count": 1024, 00:35:53.326 "small_bufsize": 8192, 00:35:53.326 "small_pool_count": 8192 00:35:53.326 } 00:35:53.326 } 00:35:53.326 ] 00:35:53.326 }, 00:35:53.326 { 00:35:53.326 "subsystem": "sock", 00:35:53.326 "config": [ 00:35:53.326 { 00:35:53.326 "method": "sock_set_default_impl", 00:35:53.326 "params": { 00:35:53.326 "impl_name": "posix" 00:35:53.326 } 00:35:53.326 }, 00:35:53.326 { 00:35:53.327 "method": "sock_impl_set_options", 00:35:53.327 "params": { 00:35:53.327 "enable_ktls": false, 00:35:53.327 "enable_placement_id": 0, 00:35:53.327 "enable_quickack": false, 00:35:53.327 "enable_recv_pipe": true, 00:35:53.327 "enable_zerocopy_send_client": false, 00:35:53.327 "enable_zerocopy_send_server": true, 00:35:53.327 "impl_name": "ssl", 00:35:53.327 "recv_buf_size": 4096, 00:35:53.327 "send_buf_size": 4096, 00:35:53.327 "tls_version": 0, 00:35:53.327 "zerocopy_threshold": 0 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "method": "sock_impl_set_options", 00:35:53.327 "params": { 00:35:53.327 "enable_ktls": false, 00:35:53.327 "enable_placement_id": 0, 00:35:53.327 "enable_quickack": false, 00:35:53.327 "enable_recv_pipe": true, 00:35:53.327 "enable_zerocopy_send_client": false, 00:35:53.327 "enable_zerocopy_send_server": true, 00:35:53.327 "impl_name": "posix", 00:35:53.327 "recv_buf_size": 2097152, 00:35:53.327 "send_buf_size": 2097152, 00:35:53.327 "tls_version": 0, 00:35:53.327 "zerocopy_threshold": 0 00:35:53.327 } 00:35:53.327 } 00:35:53.327 ] 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "subsystem": "vmd", 00:35:53.327 "config": [] 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "subsystem": "accel", 00:35:53.327 "config": [ 00:35:53.327 { 00:35:53.327 "method": "accel_set_options", 00:35:53.327 "params": { 00:35:53.327 "buf_count": 2048, 00:35:53.327 "large_cache_size": 16, 00:35:53.327 "sequence_count": 2048, 00:35:53.327 "small_cache_size": 128, 00:35:53.327 "task_count": 2048 00:35:53.327 } 00:35:53.327 } 00:35:53.327 ] 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "subsystem": "bdev", 00:35:53.327 "config": [ 00:35:53.327 { 00:35:53.327 "method": "bdev_set_options", 00:35:53.327 "params": { 00:35:53.327 "bdev_auto_examine": true, 00:35:53.327 "bdev_io_cache_size": 256, 00:35:53.327 "bdev_io_pool_size": 65535, 00:35:53.327 "iobuf_large_cache_size": 16, 00:35:53.327 "iobuf_small_cache_size": 128 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "method": "bdev_raid_set_options", 00:35:53.327 "params": { 00:35:53.327 "process_max_bandwidth_mb_sec": 0, 00:35:53.327 "process_window_size_kb": 1024 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "method": "bdev_iscsi_set_options", 00:35:53.327 "params": { 00:35:53.327 "timeout_sec": 30 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 
00:35:53.327 "method": "bdev_nvme_set_options", 00:35:53.327 "params": { 00:35:53.327 "action_on_timeout": "none", 00:35:53.327 "allow_accel_sequence": false, 00:35:53.327 "arbitration_burst": 0, 00:35:53.327 "bdev_retry_count": 3, 00:35:53.327 "ctrlr_loss_timeout_sec": 0, 00:35:53.327 "delay_cmd_submit": true, 00:35:53.327 "dhchap_dhgroups": [ 00:35:53.327 "null", 00:35:53.327 "ffdhe2048", 00:35:53.327 "ffdhe3072", 00:35:53.327 "ffdhe4096", 00:35:53.327 "ffdhe6144", 00:35:53.327 "ffdhe8192" 00:35:53.327 ], 00:35:53.327 "dhchap_digests": [ 00:35:53.327 "sha256", 00:35:53.327 "sha384", 00:35:53.327 "sha512" 00:35:53.327 ], 00:35:53.327 "disable_auto_failback": false, 00:35:53.327 "fast_io_fail_timeout_sec": 0, 00:35:53.327 "generate_uuids": false, 00:35:53.327 "high_priority_weight": 0, 00:35:53.327 "io_path_stat": false, 00:35:53.327 "io_queue_requests": 512, 00:35:53.327 "keep_alive_timeout_ms": 10000, 00:35:53.327 "low_priority_weight": 0, 00:35:53.327 "medium_priority_weight": 0, 00:35:53.327 "nvme_adminq_poll_period_us": 10000, 00:35:53.327 "nvme_error_stat": false, 00:35:53.327 "nvme_ioq_poll_period_us": 0, 00:35:53.327 "rdma_cm_event_timeout_ms": 0, 00:35:53.327 "rdma_max_cq_size": 0, 00:35:53.327 "rdma_srq_size": 0, 00:35:53.327 "reconnect_delay_sec": 0, 00:35:53.327 "timeout_admin_us": 0, 00:35:53.327 "timeout_us": 0, 00:35:53.327 "transport_ack_timeout": 0, 00:35:53.327 "transport_retry_count": 4, 00:35:53.327 "transport_tos": 0 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "method": "bdev_nvme_attach_controller", 00:35:53.327 "params": { 00:35:53.327 "adrfam": "IPv4", 00:35:53.327 "ctrlr_loss_timeout_sec": 0, 00:35:53.327 "ddgst": false, 00:35:53.327 "fast_io_fail_timeout_sec": 0, 00:35:53.327 "hdgst": false, 00:35:53.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:53.327 "multipath": "multipath", 00:35:53.327 "name": "nvme0", 00:35:53.327 "prchk_guard": false, 00:35:53.327 "prchk_reftag": false, 00:35:53.327 "psk": "key0", 00:35:53.327 "reconnect_delay_sec": 0, 00:35:53.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:53.327 "traddr": "10.0.0.3", 00:35:53.327 "trsvcid": "4420", 00:35:53.327 "trtype": "TCP" 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "method": "bdev_nvme_set_hotplug", 00:35:53.327 "params": { 00:35:53.327 "enable": false, 00:35:53.327 "period_us": 100000 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "method": "bdev_enable_histogram", 00:35:53.327 "params": { 00:35:53.327 "enable": true, 00:35:53.327 "name": "nvme0n1" 00:35:53.327 } 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "method": "bdev_wait_for_examine" 00:35:53.327 } 00:35:53.327 ] 00:35:53.327 }, 00:35:53.327 { 00:35:53.327 "subsystem": "nbd", 00:35:53.327 "config": [] 00:35:53.327 } 00:35:53.327 ] 00:35:53.327 }' 00:35:53.327 05:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:53.327 [2024-11-20 05:47:13.141210] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:35:53.327 [2024-11-20 05:47:13.141333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84777 ] 00:35:53.586 [2024-11-20 05:47:13.299020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.586 [2024-11-20 05:47:13.363521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.844 [2024-11-20 05:47:13.532599] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:54.475 05:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:54.475 05:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:35:54.475 05:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:54.475 05:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:35:54.733 05:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:54.733 05:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:54.733 Running I/O for 1 seconds... 00:35:56.109 4354.00 IOPS, 17.01 MiB/s 00:35:56.109 Latency(us) 00:35:56.109 [2024-11-20T05:47:16.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.109 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:35:56.109 Verification LBA range: start 0x0 length 0x2000 00:35:56.110 nvme0n1 : 1.01 4422.82 17.28 0.00 0.00 28751.28 3963.37 24591.60 00:35:56.110 [2024-11-20T05:47:16.029Z] =================================================================================================================== 00:35:56.110 [2024-11-20T05:47:16.029Z] Total : 4422.82 17.28 0.00 0.00 28751.28 3963.37 24591.60 00:35:56.110 { 00:35:56.110 "results": [ 00:35:56.110 { 00:35:56.110 "job": "nvme0n1", 00:35:56.110 "core_mask": "0x2", 00:35:56.110 "workload": "verify", 00:35:56.110 "status": "finished", 00:35:56.110 "verify_range": { 00:35:56.110 "start": 0, 00:35:56.110 "length": 8192 00:35:56.110 }, 00:35:56.110 "queue_depth": 128, 00:35:56.110 "io_size": 4096, 00:35:56.110 "runtime": 1.01338, 00:35:56.110 "iops": 4422.822633168209, 00:35:56.110 "mibps": 17.276650910813316, 00:35:56.110 "io_failed": 0, 00:35:56.110 "io_timeout": 0, 00:35:56.110 "avg_latency_us": 28751.27758781156, 00:35:56.110 "min_latency_us": 3963.367619047619, 00:35:56.110 "max_latency_us": 24591.60380952381 00:35:56.110 } 00:35:56.110 ], 00:35:56.110 "core_count": 1 00:35:56.110 } 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:35:56.110 
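As a quick consistency check on the run summary above: with 4 KiB I/Os, 4422.82 IOPS x 4096 B / 2^20 = 17.28 MiB/s, matching the reported throughput, and with the configured queue depth of 128, Little's law gives a rough expected average latency of 128 / 4422.82 ≈ 28.9 ms, in line with the measured 28751.28 µs.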
05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:56.110 nvmf_trace.0 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84777 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84777 ']' 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84777 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84777 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:56.110 killing process with pid 84777 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84777' 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84777 00:35:56.110 Received shutdown signal, test time was about 1.000000 seconds 00:35:56.110 00:35:56.110 Latency(us) 00:35:56.110 [2024-11-20T05:47:16.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.110 [2024-11-20T05:47:16.029Z] =================================================================================================================== 00:35:56.110 [2024-11-20T05:47:16.029Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84777 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:56.110 05:47:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:35:56.110 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:56.110 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:35:56.110 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.110 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:56.368 rmmod nvme_tcp 00:35:56.368 rmmod nvme_fabrics 00:35:56.368 rmmod nvme_keyring 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84733 ']' 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84733 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84733 ']' 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84733 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84733 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:56.368 killing process with pid 84733 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84733' 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84733 00:35:56.368 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84733 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:56.627 05:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:56.627 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Ey90tFgkn9 /tmp/tmp.8WyP6yRnPi /tmp/tmp.fX5V20kLgP 00:35:56.885 00:35:56.885 real 1m28.656s 00:35:56.885 user 2m18.350s 00:35:56.885 sys 0m32.269s 00:35:56.885 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:35:56.886 ************************************ 00:35:56.886 END TEST nvmf_tls 00:35:56.886 ************************************ 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:35:56.886 ************************************ 00:35:56.886 START TEST nvmf_fips 00:35:56.886 ************************************ 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:35:56.886 * Looking for test storage... 
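The nvmf_fips test that begins here gates on the OpenSSL version: the traced cmp_versions helper splits each version string on '.', '-' and ':' and compares the components numerically. A minimal self-contained sketch of that check, reconstructed from the xtrace lines below; the real helper in scripts/common.sh covers more operators and edge cases:

    ge() {  # usage: ge 3.1.1 3.0.0  -> exit 0 iff $1 >= $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1
        done
        return 0   # all components equal
    }
    ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "FIPS-capable OpenSSL (>= 3.0.0)"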
00:35:56.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:35:56.886 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.145 --rc genhtml_branch_coverage=1 00:35:57.145 --rc genhtml_function_coverage=1 00:35:57.145 --rc genhtml_legend=1 00:35:57.145 --rc geninfo_all_blocks=1 00:35:57.145 --rc geninfo_unexecuted_blocks=1 00:35:57.145 00:35:57.145 ' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.145 --rc genhtml_branch_coverage=1 00:35:57.145 --rc genhtml_function_coverage=1 00:35:57.145 --rc genhtml_legend=1 00:35:57.145 --rc geninfo_all_blocks=1 00:35:57.145 --rc geninfo_unexecuted_blocks=1 00:35:57.145 00:35:57.145 ' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.145 --rc genhtml_branch_coverage=1 00:35:57.145 --rc genhtml_function_coverage=1 00:35:57.145 --rc genhtml_legend=1 00:35:57.145 --rc geninfo_all_blocks=1 00:35:57.145 --rc geninfo_unexecuted_blocks=1 00:35:57.145 00:35:57.145 ' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:57.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.145 --rc genhtml_branch_coverage=1 00:35:57.145 --rc genhtml_function_coverage=1 00:35:57.145 --rc genhtml_legend=1 00:35:57.145 --rc geninfo_all_blocks=1 00:35:57.145 --rc geninfo_unexecuted_blocks=1 00:35:57.145 00:35:57.145 ' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.145 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:57.146 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:35:57.146 05:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:35:57.146 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:35:57.406 Error setting digest 00:35:57.406 40728E40AD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:35:57.406 40728E40AD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:57.406 
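That failing md5 call is the point of the whole preamble: with OPENSSL_CONF switched to the generated FIPS-only configuration, a non-approved digest must be rejected, which the unsupported/initialization errors above confirm. The same negative check in isolation; the config path below is a hypothetical stand-in for the generated spdk_fips.conf:

export OPENSSL_CONF=/tmp/fips-only.cnf   # hypothetical stand-in for the generated config
# under a base+fips provider set, MD5 must fail...
if echo -n probe | openssl md5 >/dev/null 2>&1; then
    echo "FIPS restriction NOT in effect: md5 succeeded" >&2
    exit 1
fi
# ...while a FIPS-approved digest still works
echo -n probe | openssl sha256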
05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:57.406 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:57.407 Cannot find device "nvmf_init_br" 00:35:57.407 05:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:57.407 Cannot find device "nvmf_init_br2" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:57.407 Cannot find device "nvmf_tgt_br" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:57.407 Cannot find device "nvmf_tgt_br2" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:57.407 Cannot find device "nvmf_init_br" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:57.407 Cannot find device "nvmf_init_br2" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:57.407 Cannot find device "nvmf_tgt_br" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:57.407 Cannot find device "nvmf_tgt_br2" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:57.407 Cannot find device "nvmf_br" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:57.407 Cannot find device "nvmf_init_if" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:57.407 Cannot find device "nvmf_init_if2" 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:57.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:57.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:57.407 05:47:17 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:57.407 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:57.665 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:57.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:57.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:35:57.666 00:35:57.666 --- 10.0.0.3 ping statistics --- 00:35:57.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.666 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:57.666 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:57.666 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:35:57.666 00:35:57.666 --- 10.0.0.4 ping statistics --- 00:35:57.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.666 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:57.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:57.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:35:57.666 00:35:57.666 --- 10.0.0.1 ping statistics --- 00:35:57.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.666 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:57.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:57.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:35:57.666 00:35:57.666 --- 10.0.0.2 ping statistics --- 00:35:57.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.666 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:57.666 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85123 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85123 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 85123 ']' 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:57.924 05:47:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:57.924 [2024-11-20 05:47:17.686722] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
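nvmfappstart, as traced above, reduces to launching the target inside the test namespace and waiting for its RPC socket to answer. A rough sketch with the launch command copied from the trace; the poll loop is a simplified stand-in for autotest_common.sh's waitforlisten helper, not its actual body:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll until the app answers on its default RPC socket
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done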
00:35:57.924 [2024-11-20 05:47:17.686820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:58.183 [2024-11-20 05:47:17.846959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.183 [2024-11-20 05:47:17.907438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:58.183 [2024-11-20 05:47:17.907511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:58.183 [2024-11-20 05:47:17.907526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:58.183 [2024-11-20 05:47:17.907539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:58.183 [2024-11-20 05:47:17.907551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:58.183 [2024-11-20 05:47:17.907910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.m2t 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.m2t 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.m2t 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.m2t 00:35:59.118 05:47:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:59.376 [2024-11-20 05:47:19.062127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.376 [2024-11-20 05:47:19.078089] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:59.376 [2024-11-20 05:47:19.078282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:59.376 malloc0 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:59.376 05:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85177 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85177 /var/tmp/bdevperf.sock 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 85177 ']' 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:59.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:59.376 05:47:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:35:59.376 [2024-11-20 05:47:19.230644] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:59.376 [2024-11-20 05:47:19.230739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85177 ] 00:35:59.634 [2024-11-20 05:47:19.384184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.634 [2024-11-20 05:47:19.450299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:00.566 05:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:00.566 05:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:36:00.566 05:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.m2t 00:36:00.825 05:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:36:01.083 [2024-11-20 05:47:20.805358] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:01.083 TLSTESTn1 00:36:01.083 05:47:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:01.341 Running I/O for 10 seconds... 
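Everything the TLS data path needed is in the three invocations just traced; condensed here for readability, with every value copied from the trace (the PSK path is whatever mktemp returned on this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# register the NVMe/TCP pre-shared key with the bdevperf instance
$rpc -s $sock keyring_file_add_key key0 /tmp/spdk-psk.m2t
# attach to the target's TLS listener using that key
$rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# drive the configured 10-second verify workload
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests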
00:36:03.212 4926.00 IOPS, 19.24 MiB/s [2024-11-20T05:47:24.105Z] 5120.50 IOPS, 20.00 MiB/s [2024-11-20T05:47:25.479Z] 5031.67 IOPS, 19.65 MiB/s [2024-11-20T05:47:26.413Z] 5085.50 IOPS, 19.87 MiB/s [2024-11-20T05:47:27.405Z] 5088.60 IOPS, 19.88 MiB/s [2024-11-20T05:47:28.338Z] 5022.33 IOPS, 19.62 MiB/s [2024-11-20T05:47:29.272Z] 5042.86 IOPS, 19.70 MiB/s [2024-11-20T05:47:30.207Z] 5026.25 IOPS, 19.63 MiB/s [2024-11-20T05:47:31.142Z] 5062.89 IOPS, 19.78 MiB/s [2024-11-20T05:47:31.142Z] 5098.10 IOPS, 19.91 MiB/s 00:36:11.223 Latency(us) 00:36:11.223 [2024-11-20T05:47:31.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.223 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:36:11.223 Verification LBA range: start 0x0 length 0x2000 00:36:11.223 TLSTESTn1 : 10.01 5103.22 19.93 0.00 0.00 25040.11 4962.01 30458.64 00:36:11.223 [2024-11-20T05:47:31.142Z] =================================================================================================================== 00:36:11.223 [2024-11-20T05:47:31.142Z] Total : 5103.22 19.93 0.00 0.00 25040.11 4962.01 30458.64 00:36:11.223 { 00:36:11.223 "results": [ 00:36:11.223 { 00:36:11.223 "job": "TLSTESTn1", 00:36:11.223 "core_mask": "0x4", 00:36:11.223 "workload": "verify", 00:36:11.223 "status": "finished", 00:36:11.223 "verify_range": { 00:36:11.223 "start": 0, 00:36:11.223 "length": 8192 00:36:11.223 }, 00:36:11.223 "queue_depth": 128, 00:36:11.223 "io_size": 4096, 00:36:11.223 "runtime": 10.014466, 00:36:11.223 "iops": 5103.21768529645, 00:36:11.223 "mibps": 19.93444408318926, 00:36:11.223 "io_failed": 0, 00:36:11.223 "io_timeout": 0, 00:36:11.223 "avg_latency_us": 25040.111202971228, 00:36:11.223 "min_latency_us": 4962.011428571429, 00:36:11.223 "max_latency_us": 30458.63619047619 00:36:11.223 } 00:36:11.223 ], 00:36:11.223 "core_count": 1 00:36:11.223 } 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:36:11.223 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:11.223 nvmf_trace.0 00:36:11.482 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:36:11.482 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85177 00:36:11.482 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 85177 ']' 00:36:11.482 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
85177 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85177 00:36:11.483 killing process with pid 85177 00:36:11.483 Received shutdown signal, test time was about 10.000000 seconds 00:36:11.483 00:36:11.483 Latency(us) 00:36:11.483 [2024-11-20T05:47:31.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.483 [2024-11-20T05:47:31.402Z] =================================================================================================================== 00:36:11.483 [2024-11-20T05:47:31.402Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85177' 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 85177 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 85177 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:11.483 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:11.748 rmmod nvme_tcp 00:36:11.748 rmmod nvme_fabrics 00:36:11.748 rmmod nvme_keyring 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85123 ']' 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85123 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 85123 ']' 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 85123 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85123 00:36:11.748 killing process with pid 85123 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85123' 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 85123 00:36:11.748 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 85123 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:12.016 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:12.017 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:12.017 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:12.017 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.017 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.017 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.274 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:36:12.275 05:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.m2t 00:36:12.275 00:36:12.275 real 0m15.295s 00:36:12.275 user 0m20.216s 00:36:12.275 sys 0m6.736s 00:36:12.275 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:12.275 05:47:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:36:12.275 ************************************ 00:36:12.275 END TEST nvmf_fips 00:36:12.275 ************************************ 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:36:12.275 ************************************ 00:36:12.275 START TEST nvmf_control_msg_list 00:36:12.275 ************************************ 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:36:12.275 * Looking for test storage... 00:36:12.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:36:12.275 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.533 --rc genhtml_branch_coverage=1 00:36:12.533 --rc genhtml_function_coverage=1 00:36:12.533 --rc genhtml_legend=1 00:36:12.533 --rc geninfo_all_blocks=1 00:36:12.533 --rc geninfo_unexecuted_blocks=1 00:36:12.533 00:36:12.533 ' 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.533 --rc genhtml_branch_coverage=1 00:36:12.533 --rc genhtml_function_coverage=1 00:36:12.533 --rc genhtml_legend=1 00:36:12.533 --rc geninfo_all_blocks=1 00:36:12.533 --rc geninfo_unexecuted_blocks=1 00:36:12.533 00:36:12.533 ' 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.533 --rc genhtml_branch_coverage=1 00:36:12.533 --rc genhtml_function_coverage=1 00:36:12.533 --rc genhtml_legend=1 00:36:12.533 --rc geninfo_all_blocks=1 00:36:12.533 --rc geninfo_unexecuted_blocks=1 00:36:12.533 00:36:12.533 ' 00:36:12.533 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:12.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:12.533 --rc genhtml_branch_coverage=1 00:36:12.534 --rc genhtml_function_coverage=1 00:36:12.534 --rc genhtml_legend=1 00:36:12.534 --rc geninfo_all_blocks=1 00:36:12.534 --rc 
geninfo_unexecuted_blocks=1 00:36:12.534 00:36:12.534 ' 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:12.534 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
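The `[: : integer expression expected` complaint at the top of this block is bash itself: common.sh line 33 runs `'[' '' -eq 1 ']'`, a numeric test against a variable that is unset in this environment, so `[` sees an empty operand. The trace simply moves on to line 37, so the failure is harmless here. A minimal reproduction and the usual guard (the variable name below is hypothetical; the trace does not show which one common.sh expands):

    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ]        # bash: [: : integer expression expected
    [ "${MAYBE_FLAG:-0}" -eq 1 ]   # safe: an empty value defaults to 0 and the test is simply false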
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:12.534 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:12.534 Cannot find device "nvmf_init_br" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:12.535 Cannot find device "nvmf_init_br2" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:12.535 Cannot find device "nvmf_tgt_br" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:12.535 Cannot find device "nvmf_tgt_br2" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:12.535 Cannot find device "nvmf_init_br" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:12.535 Cannot find device "nvmf_init_br2" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:12.535 Cannot find device "nvmf_tgt_br" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:12.535 Cannot find device "nvmf_tgt_br2" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:12.535 Cannot find device "nvmf_br" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:12.535 Cannot find 
device "nvmf_init_if" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:12.535 Cannot find device "nvmf_init_if2" 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:12.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:12.535 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:12.535 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:12.793 05:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:12.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:12.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:36:12.793 00:36:12.793 --- 10.0.0.3 ping statistics --- 00:36:12.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.793 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:12.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:12.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:36:12.793 00:36:12.793 --- 10.0.0.4 ping statistics --- 00:36:12.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.793 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:36:12.793 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:12.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
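The `ipts` wrapper traced above tags every firewall rule it installs with an iptables comment, which is what lets the teardown at the end of this test (`iptr`, an `iptables-save | grep -v SPDK_NVMF | iptables-restore` pipeline) remove exactly the harness's rules and nothing else. A sketch consistent with the trace:

    ipts() {   # install a rule, tagged so it can be found again later
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {   # drop every tagged rule in one pass
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port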
00:36:12.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:36:12.793 00:36:12.793 --- 10.0.0.1 ping statistics --- 00:36:12.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.794 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:12.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:12.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:36:12.794 00:36:12.794 --- 10.0.0.2 ping statistics --- 00:36:12.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:12.794 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85604 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85604 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 85604 ']' 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:12.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
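With connectivity verified by the four pings, nvmfappstart launches the target inside the namespace (`ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF`, as traced above) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A sketch of that pattern; the polling mechanism here is an assumption, the real helper lives in autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # poll the RPC socket until the target is ready to accept commands
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target already died
        sleep 0.1
    done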
00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:12.794 05:47:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:13.052 [2024-11-20 05:47:32.751408] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:13.052 [2024-11-20 05:47:32.751496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.052 [2024-11-20 05:47:32.893942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.052 [2024-11-20 05:47:32.944609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.052 [2024-11-20 05:47:32.944661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.052 [2024-11-20 05:47:32.944672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.052 [2024-11-20 05:47:32.944682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.052 [2024-11-20 05:47:32.944701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.052 [2024-11-20 05:47:32.945027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:13.311 [2024-11-20 05:47:33.108241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
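Once the target is up, the rpc_cmd calls above and immediately below provision it over that socket: a TCP transport restricted to a single control message buffer (`--control-msg-num 1`, the constrained path this test exercises, together with `--in-capsule-data-size 768`), then a subsystem, a malloc bdev, a namespace, and a listener. The same sequence through scripts/rpc.py, with every flag copied from the trace (rpc_cmd is the test suite's wrapper around this tool):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host to connect
    $rpc bdev_malloc_create -b Malloc0 32 512                       # 32 MiB bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420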
common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:13.311 Malloc0 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:13.311 [2024-11-20 05:47:33.145204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85641 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85642 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85643 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85641 00:36:13.311 05:47:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:13.570 [2024-11-20 05:47:33.323476] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:36:13.570 [2024-11-20 05:47:33.343819] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:36:13.570 [2024-11-20 05:47:33.344167] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:36:14.506 Initializing NVMe Controllers 00:36:14.506 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:36:14.506 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:36:14.506 Initialization complete. Launching workers. 00:36:14.506 ======================================================== 00:36:14.506 Latency(us) 00:36:14.506 Device Information : IOPS MiB/s Average min max 00:36:14.506 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4559.00 17.81 219.05 90.30 551.26 00:36:14.506 ======================================================== 00:36:14.506 Total : 4559.00 17.81 219.05 90.30 551.26 00:36:14.506 00:36:14.506 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85642 00:36:14.506 Initializing NVMe Controllers 00:36:14.506 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:36:14.506 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:36:14.506 Initialization complete. Launching workers. 00:36:14.506 ======================================================== 00:36:14.506 Latency(us) 00:36:14.506 Device Information : IOPS MiB/s Average min max 00:36:14.506 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4540.00 17.73 219.98 100.73 308.62 00:36:14.506 ======================================================== 00:36:14.506 Total : 4540.00 17.73 219.98 100.73 308.62 00:36:14.506 00:36:14.506 Initializing NVMe Controllers 00:36:14.506 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:36:14.506 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:36:14.506 Initialization complete. Launching workers. 
00:36:14.506 ======================================================== 00:36:14.506 Latency(us) 00:36:14.506 Device Information : IOPS MiB/s Average min max 00:36:14.506 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4534.00 17.71 220.26 126.17 541.57 00:36:14.506 ======================================================== 00:36:14.506 Total : 4534.00 17.71 220.26 126.17 541.57 00:36:14.506 00:36:14.506 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85643 00:36:14.506 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:14.506 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:36:14.506 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:14.506 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:14.765 rmmod nvme_tcp 00:36:14.765 rmmod nvme_fabrics 00:36:14.765 rmmod nvme_keyring 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85604 ']' 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85604 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 85604 ']' 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 85604 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85604 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:14.765 killing process with pid 85604 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85604' 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 85604 00:36:14.765 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 85604 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 
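The three result tables come from three spdk_nvme_perf instances started concurrently on cores 1, 2 and 3 (`-c 0x2/0x4/0x8 -q 1 -o 4096 -w randread -t 1`), all aimed at the same listener so they contend for the transport's single control message buffer. At queue depth 1 the columns are mutually consistent: MiB/s is just IOPS times the 4 KiB I/O size, and IOPS is roughly the reciprocal of the average latency. Checking the first table's numbers:

    awk 'BEGIN { printf "%.2f MiB/s\n", 4559 * 4096 / (1024 * 1024) }'   # -> 17.81, matching the table
    awk 'BEGIN { printf "%.0f IOPS\n", 1e6 / 219.05 }'                   # -> 4565, close to the 4559 measured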
-- # '[' '' == iso ']' 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:15.024 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:15.283 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:15.283 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.283 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.283 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.283 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:36:15.283 00:36:15.283 real 0m2.946s 00:36:15.283 user 0m4.425s 00:36:15.283 sys 0m1.729s 00:36:15.283 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 
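nvmftestfini then unwinds everything in reverse: kill the target, unload the nvme-tcp modules, restore iptables through the SPDK_NVMF comment filter shown earlier, tear the veth/bridge topology down, and delete the namespace. The killprocess helper traced above is roughly:

    killprocess() {   # condensed from the trace; the real helper has more branches
        local pid=$1
        kill -0 "$pid"                                   # fail fast if the process is already gone
        local name=$(ps --no-headers -o comm= "$pid")    # the trace sees 'reactor_0' here
        [ "$name" = sudo ] && return 1                   # sketch: the sudo case is handled differently
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                       # reap it and propagate the exit status
    }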
00:36:15.283 05:47:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:36:15.283 ************************************ 00:36:15.283 END TEST nvmf_control_msg_list 00:36:15.283 ************************************ 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:36:15.283 ************************************ 00:36:15.283 START TEST nvmf_wait_for_buf 00:36:15.283 ************************************ 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:36:15.283 * Looking for test storage... 00:36:15.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:36:15.283 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.542 --rc genhtml_branch_coverage=1 00:36:15.542 --rc genhtml_function_coverage=1 00:36:15.542 --rc genhtml_legend=1 00:36:15.542 --rc geninfo_all_blocks=1 00:36:15.542 --rc geninfo_unexecuted_blocks=1 00:36:15.542 00:36:15.542 ' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.542 --rc genhtml_branch_coverage=1 00:36:15.542 --rc genhtml_function_coverage=1 00:36:15.542 --rc genhtml_legend=1 00:36:15.542 --rc geninfo_all_blocks=1 00:36:15.542 --rc geninfo_unexecuted_blocks=1 00:36:15.542 00:36:15.542 ' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.542 --rc genhtml_branch_coverage=1 00:36:15.542 --rc genhtml_function_coverage=1 00:36:15.542 --rc genhtml_legend=1 00:36:15.542 --rc geninfo_all_blocks=1 00:36:15.542 --rc geninfo_unexecuted_blocks=1 00:36:15.542 00:36:15.542 ' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:15.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.542 --rc genhtml_branch_coverage=1 00:36:15.542 --rc genhtml_function_coverage=1 00:36:15.542 --rc genhtml_legend=1 00:36:15.542 --rc geninfo_all_blocks=1 00:36:15.542 --rc geninfo_unexecuted_blocks=1 00:36:15.542 00:36:15.542 ' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:15.542 05:47:35 
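The scripts/common.sh trace above is the `lt 1.15 2` check deciding whether the installed lcov predates version 2 (it does, so the older option spelling is kept): both version strings are split on `.`, `-` and `:`, then compared field by field as integers. A condensed sketch of that comparison:

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == *'>'* ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]   # all fields equal: true only for <=, >=, ==
    }
    cmp_versions 1.15 '<' 2 && echo "lcov is older than 2"   # prints: lcov is older than 2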
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.542 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[the same three toolchain directories repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=[same value, prepends reordered]
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo [same PATH value]
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:36:15.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
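A few lines up, common.sh also minted a per-run host identity: `nvme gen-hostnqn` produced the uuid-based NQN recorded above, and the NVME_HOST array carries it as `--hostnqn`/`--hostid` for every later `nvme connect` (none appear in this excerpt). Purely for illustration, a connect using this run's values and the subsystem NQN borrowed from the previous test:

    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2024-07.io.spdk:cnode0 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 \
        --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203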
00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:15.543 Cannot find device "nvmf_init_br" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:15.543 Cannot find device "nvmf_init_br2" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:15.543 Cannot find device "nvmf_tgt_br" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:15.543 Cannot find device "nvmf_tgt_br2" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:15.543 Cannot find device "nvmf_init_br" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:15.543 Cannot find device "nvmf_init_br2" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:15.543 Cannot find device "nvmf_tgt_br" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:15.543 Cannot find device "nvmf_tgt_br2" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:15.543 Cannot find device "nvmf_br" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:15.543 Cannot find device "nvmf_init_if" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:15.543 Cannot find device "nvmf_init_if2" 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:15.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:15.543 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:15.543 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:15.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:15.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:36:15.802 00:36:15.802 --- 10.0.0.3 ping statistics --- 00:36:15.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.802 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:15.802 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:15.802 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:36:15.802 00:36:15.802 --- 10.0.0.4 ping statistics --- 00:36:15.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.802 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:15.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:15.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:36:15.802 00:36:15.802 --- 10.0.0.1 ping statistics --- 00:36:15.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.802 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:36:15.802 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:15.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:15.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:36:15.802 00:36:15.803 --- 10.0.0.2 ping statistics --- 00:36:15.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.803 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:15.803 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85876 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85876 00:36:16.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 85876 ']' 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:16.062 05:47:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.062 [2024-11-20 05:47:35.793037] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:36:16.062 [2024-11-20 05:47:35.793305] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.062 [2024-11-20 05:47:35.937837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.321 [2024-11-20 05:47:35.990287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.321 [2024-11-20 05:47:35.990338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.321 [2024-11-20 05:47:35.990348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.321 [2024-11-20 05:47:35.990356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.321 [2024-11-20 05:47:35.990364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.321 [2024-11-20 05:47:35.990681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.321 05:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.321 Malloc0 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.321 [2024-11-20 05:47:36.221262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.321 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:16.581 [2024-11-20 05:47:36.257362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.581 05:47:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:16.581 [2024-11-20 05:47:36.461552] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:36:17.959 Initializing NVMe Controllers 00:36:17.959 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:36:17.959 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:36:17.959 Initialization complete. Launching workers. 00:36:17.959 ======================================================== 00:36:17.959 Latency(us) 00:36:17.959 Device Information : IOPS MiB/s Average min max 00:36:17.959 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.88 15.98 32404.67 8020.67 65012.96 00:36:17.959 ======================================================== 00:36:17.959 Total : 127.88 15.98 32404.67 8020.67 65012.96 00:36:17.959 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:17.960 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:18.219 rmmod nvme_tcp 00:36:18.219 rmmod nvme_fabrics 00:36:18.219 rmmod nvme_keyring 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85876 ']' 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85876 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 85876 ']' 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 85876 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 
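The pass condition of nvmf_wait_for_buf is visible in the rpc_cmd/jq pair above: the test shrinks the iobuf small pool (iobuf_set_options --small-pool-count 154) and the TCP transport buffer cache (-n 24 -b 24), pushes 4 outstanding 128 KiB reads through spdk_nvme_perf, then requires that the transport recorded small-pool allocation retries (2022 in this run). A minimal sketch of that final check, reconstructed from the trace — the exact failure handling in wait_for_buf.sh may differ:

# Query per-module iobuf statistics and pull the nvmf_TCP small-pool retry counter.
retry_count=$(rpc_cmd iobuf_get_stats |
    jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
if [[ $retry_count -eq 0 ]]; then
    # No retries means the buffer-starvation path was never exercised.
    echo "expected nvmf_TCP small-pool retries, got $retry_count" >&2
    return 1
fi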
00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85876 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:18.219 killing process with pid 85876 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85876' 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 85876 00:36:18.219 05:47:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 85876 00:36:18.219 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:18.219 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:18.219 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:18.219 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:18.516 05:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:36:18.516 00:36:18.516 real 0m3.364s 00:36:18.516 user 0m2.542s 00:36:18.516 sys 0m0.957s 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:18.516 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:36:18.516 ************************************ 00:36:18.516 END TEST nvmf_wait_for_buf 00:36:18.516 ************************************ 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:36:18.778 ************************************ 00:36:18.778 START TEST nvmf_nsid 00:36:18.778 ************************************ 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:36:18.778 * Looking for test storage... 
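The START TEST banner above is printed by the run_test helper from autotest_common.sh, which wraps each test script invocation. Roughly — a simplified sketch; the real helper also records per-test timing and manages xtrace state:

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # Run the test script with its arguments, e.g. nsid.sh --transport=tcp.
    "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}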
00:36:18.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:18.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.778 --rc genhtml_branch_coverage=1 00:36:18.778 --rc genhtml_function_coverage=1 00:36:18.778 --rc genhtml_legend=1 00:36:18.778 --rc geninfo_all_blocks=1 00:36:18.778 --rc geninfo_unexecuted_blocks=1 00:36:18.778 00:36:18.778 ' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:18.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.778 --rc genhtml_branch_coverage=1 00:36:18.778 --rc genhtml_function_coverage=1 00:36:18.778 --rc genhtml_legend=1 00:36:18.778 --rc geninfo_all_blocks=1 00:36:18.778 --rc geninfo_unexecuted_blocks=1 00:36:18.778 00:36:18.778 ' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:18.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.778 --rc genhtml_branch_coverage=1 00:36:18.778 --rc genhtml_function_coverage=1 00:36:18.778 --rc genhtml_legend=1 00:36:18.778 --rc geninfo_all_blocks=1 00:36:18.778 --rc geninfo_unexecuted_blocks=1 00:36:18.778 00:36:18.778 ' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:18.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.778 --rc genhtml_branch_coverage=1 00:36:18.778 --rc genhtml_function_coverage=1 00:36:18.778 --rc genhtml_legend=1 00:36:18.778 --rc geninfo_all_blocks=1 00:36:18.778 --rc geninfo_unexecuted_blocks=1 00:36:18.778 00:36:18.778 ' 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
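The cmp_versions trace above is a plain field-wise compare: lt 1.15 2 splits both version strings on '.', '-' and ':' and walks the fields, so ver1=(1 15) against ver2=(2) is decided by the first field (1 < 2), and the lcov 1.x spelling of the coverage flags is selected. A condensed, runnable equivalent of the traced logic (missing fields default to 0; non-numeric fields are not handled here):

lt() {
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Walk up to the longer of the two field lists, as in scripts/common.sh.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then return 0; fi
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then return 1; fi
    done
    return 1    # equal is not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_branch_coverage=1 style options"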
00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:18.778 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:19.038 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:19.039 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:19.039 Cannot find device "nvmf_init_br" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:19.039 Cannot find device "nvmf_init_br2" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:19.039 Cannot find device "nvmf_tgt_br" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:19.039 Cannot find device "nvmf_tgt_br2" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:19.039 Cannot find device "nvmf_init_br" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:19.039 Cannot find device "nvmf_init_br2" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:19.039 Cannot find device "nvmf_tgt_br" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:19.039 Cannot find device "nvmf_tgt_br2" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:19.039 Cannot find device "nvmf_br" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:19.039 Cannot find device "nvmf_init_if" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:19.039 Cannot find device "nvmf_init_if2" 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:19.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:36:19.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:19.039 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:19.299 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:19.299 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:19.299 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:19.299 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:19.299 05:47:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
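At this point nvmf_veth_init has rebuilt the test network for the nsid run: nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) live in the root namespace, nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, and all four veth peers hang off the nvmf_br bridge. A minimal single-pair reproduction of the same topology, using only names and addresses from the trace:

# One initiator-side and one target-side veth pair joined by the bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns can reach the initiator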
00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:19.299 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:19.299 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:36:19.299 00:36:19.299 --- 10.0.0.3 ping statistics --- 00:36:19.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.299 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:19.299 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:19.299 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:36:19.299 00:36:19.299 --- 10.0.0.4 ping statistics --- 00:36:19.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.299 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:19.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:19.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:36:19.299 00:36:19.299 --- 10.0.0.1 ping statistics --- 00:36:19.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.299 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:19.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:19.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:36:19.299 00:36:19.299 --- 10.0.0.2 ping statistics --- 00:36:19.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.299 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:19.299 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=86156 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 86156 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 86156 ']' 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:19.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:19.300 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:36:19.559 [2024-11-20 05:47:39.268317] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:36:19.559 [2024-11-20 05:47:39.268413] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.559 [2024-11-20 05:47:39.428282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.818 [2024-11-20 05:47:39.486714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.818 [2024-11-20 05:47:39.486775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.818 [2024-11-20 05:47:39.486790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.818 [2024-11-20 05:47:39.486804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.818 [2024-11-20 05:47:39.486815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:19.818 [2024-11-20 05:47:39.487174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86182 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
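The get_main_ns_ip trace just above reduces to a transport-keyed lookup with bash indirect expansion: for tcp it resolves NVMF_INITIATOR_IP, which is why the second target (started on /var/tmp/tgt2.sock in the root namespace, as the tgt2addr assignment that follows shows) ends up listening on the initiator-side 10.0.0.1 rather than an address inside the target netns. The traced branch, condensed:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP
    echo "${!ip}"                           # indirect expansion -> 10.0.0.1 in this run
}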
00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8589a2b7-781c-4c3b-be28-9cec03b5dc66 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3aa67fa5-91a9-4324-bdeb-c9df7897db92 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=da553b20-5366-4a24-979d-f2571f7523fd 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.818 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:36:19.818 null0 00:36:19.818 null1 00:36:19.818 null2 00:36:19.818 [2024-11-20 05:47:39.713599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.818 [2024-11-20 05:47:39.719804] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:19.818 [2024-11-20 05:47:39.719894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86182 ] 00:36:20.076 [2024-11-20 05:47:39.737719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86182 /var/tmp/tgt2.sock 00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 86182 ']' 00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:20.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
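The bare rpc_cmd at target/nsid.sh@63 above feeds a batch of RPCs to the first target on stdin (rpc.py reads newline-separated commands when no subcommand is given), which is what creates the three null bdevs and the 10.0.0.3:4420 listener whose notices appear in the trace. A sketch of such a batch — the bdev names, subsystem NQN, and listener address are taken from this log, while the null-bdev size/block-size arguments and the subsystem options here are illustrative only:

rpc_cmd << CONFIG
bdev_null_create null0 100 4096
bdev_null_create null1 100 4096
bdev_null_create null2 100 4096
nvmf_create_transport -t tcp -o
nvmf_create_subsystem nqn.2024-10.io.spdk:cnode0 -a
nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
CONFIG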
00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:20.076 05:47:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:36:20.076 [2024-11-20 05:47:39.874911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.076 [2024-11-20 05:47:39.968492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:20.642 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:20.642 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:36:20.642 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:36:20.900 [2024-11-20 05:47:40.690122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:20.900 [2024-11-20 05:47:40.706234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:36:20.900 nvme0n1 nvme0n2 00:36:20.900 nvme1n1 00:36:20.900 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:36:20.900 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:36:20.900 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:36:21.158 05:47:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- common/autotest_common.sh@1248 -- # return 0 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8589a2b7-781c-4c3b-be28-9cec03b5dc66 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8589a2b7781c4c3bbe289cec03b5dc66 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8589A2B7781C4C3BBE289CEC03B5DC66 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8589A2B7781C4C3BBE289CEC03B5DC66 == \8\5\8\9\A\2\B\7\7\8\1\C\4\C\3\B\B\E\2\8\9\C\E\C\0\3\B\5\D\C\6\6 ]] 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:36:22.093 05:47:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:36:22.093 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:36:22.093 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3aa67fa5-91a9-4324-bdeb-c9df7897db92 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3aa67fa591a94324bdebc9df7897db92 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3AA67FA591A94324BDEBC9DF7897DB92 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3AA67FA591A94324BDEBC9DF7897DB92 == \3\A\A\6\7\F\A\5\9\1\A\9\4\3\2\4\B\D\E\B\C\9\D\F\7\8\9\7\D\B\9\2 ]] 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid da553b20-5366-4a24-979d-f2571f7523fd 00:36:22.352 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=da553b2053664a24979df2571f7523fd 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DA553B2053664A24979DF2571F7523FD 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DA553B2053664A24979DF2571F7523FD == \D\A\5\5\3\B\2\0\5\3\6\6\4\A\2\4\9\7\9\D\F\2\5\7\1\F\7\5\2\3\F\D ]] 00:36:22.353 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86182 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 86182 ']' 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 86182 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86182 00:36:22.611 killing process with pid 86182 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86182' 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 86182 00:36:22.611 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 86182 00:36:23.179 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:36:23.179 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:23.179 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:36:23.179 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:23.179 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:36:23.179 05:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:23.179 05:47:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:23.179 rmmod nvme_tcp 00:36:23.179 rmmod nvme_fabrics 00:36:23.179 rmmod nvme_keyring 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 86156 ']' 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 86156 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 86156 ']' 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 86156 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86156 00:36:23.179 killing process with pid 86156 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86156' 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 86156 00:36:23.179 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 86156 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 
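Before this teardown, the test validated each namespace by comparing the NGUID the device reports against the UUID assigned to it; an NGUID is simply the UUID with the dashes stripped, compared case-insensitively. The uuid2nguid / nvme_get_nguid helpers traced above reduce to a few lines (a sketch, assuming the device node exists):

uuid=8589a2b7-781c-4c3b-be28-9cec03b5dc66             # value from the trace
want=$(tr -d - <<< "$uuid")
got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
[[ ${got^^} == "${want^^}" ]] && echo "NGUID matches"

The same pattern repeats above for nvme0n2 and nvme0n3 against ns2uuid and ns3uuid.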
00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:23.437 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:23.438 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:36:23.696 00:36:23.696 real 0m5.016s 00:36:23.696 user 0m7.242s 00:36:23.696 sys 0m1.847s 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:36:23.696 ************************************ 00:36:23.696 END TEST nvmf_nsid 00:36:23.696 ************************************ 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:36:23.696 00:36:23.696 real 7m16.028s 00:36:23.696 user 17m2.671s 00:36:23.696 sys 1m50.352s 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:23.696 ************************************ 00:36:23.696 END TEST nvmf_target_extra 00:36:23.696 ************************************ 00:36:23.696 05:47:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:36:23.696 05:47:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:36:23.696 05:47:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:23.696 05:47:43 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:23.696 05:47:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:23.696 ************************************ 00:36:23.696 START TEST nvmf_host 00:36:23.696 ************************************ 00:36:23.696 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:36:23.955 * Looking for test storage... 
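The END/START banners here come from run_test, which wraps each suite, times it (the real/user/sys triple above), and propagates its exit status. The suites are plain shell scripts, so a single one can be re-run outside the harness with the same arguments (a sketch with the paths from the log; the autorun environment variables may still need to be set):

/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp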
00:36:23.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:36:23.955 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:23.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:23.956 --rc genhtml_branch_coverage=1 00:36:23.956 --rc genhtml_function_coverage=1 00:36:23.956 --rc genhtml_legend=1 00:36:23.956 --rc geninfo_all_blocks=1 00:36:23.956 --rc geninfo_unexecuted_blocks=1 00:36:23.956 00:36:23.956 ' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:23.956 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:36:23.956 --rc genhtml_branch_coverage=1 00:36:23.956 --rc genhtml_function_coverage=1 00:36:23.956 --rc genhtml_legend=1 00:36:23.956 --rc geninfo_all_blocks=1 00:36:23.956 --rc geninfo_unexecuted_blocks=1 00:36:23.956 00:36:23.956 ' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:23.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:23.956 --rc genhtml_branch_coverage=1 00:36:23.956 --rc genhtml_function_coverage=1 00:36:23.956 --rc genhtml_legend=1 00:36:23.956 --rc geninfo_all_blocks=1 00:36:23.956 --rc geninfo_unexecuted_blocks=1 00:36:23.956 00:36:23.956 ' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:23.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:23.956 --rc genhtml_branch_coverage=1 00:36:23.956 --rc genhtml_function_coverage=1 00:36:23.956 --rc genhtml_legend=1 00:36:23.956 --rc geninfo_all_blocks=1 00:36:23.956 --rc geninfo_unexecuted_blocks=1 00:36:23.956 00:36:23.956 ' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:23.956 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
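The '[: : integer expression expected' message from common.sh line 33, which shows up once per suite, decodes as follows: the traced test is '[' '' -eq 1 ']', and test(1) requires an integer on both sides of -eq, so an empty operand makes it error out with status 2 instead of evaluating to false, and the script simply falls through to the next branch. A guarded sketch, where FLAG is a stand-in for whatever variable line 33 actually checks:

FLAG=""
[ "$FLAG" -eq 1 ]               # [: : integer expression expected (status 2)
[ "${FLAG:-0}" -eq 1 ] || echo "empty treated as 0"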
00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.956 ************************************ 00:36:23.956 START TEST nvmf_multicontroller 00:36:23.956 ************************************ 00:36:23.956 05:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:36:24.215 * Looking for test storage... 00:36:24.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:24.215 05:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:24.215 05:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:36:24.215 05:47:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.215 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:24.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.216 --rc genhtml_branch_coverage=1 00:36:24.216 --rc genhtml_function_coverage=1 00:36:24.216 --rc genhtml_legend=1 00:36:24.216 --rc geninfo_all_blocks=1 00:36:24.216 --rc geninfo_unexecuted_blocks=1 00:36:24.216 00:36:24.216 ' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:24.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.216 --rc genhtml_branch_coverage=1 00:36:24.216 --rc genhtml_function_coverage=1 00:36:24.216 --rc genhtml_legend=1 00:36:24.216 --rc geninfo_all_blocks=1 00:36:24.216 --rc geninfo_unexecuted_blocks=1 00:36:24.216 00:36:24.216 ' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:24.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.216 --rc genhtml_branch_coverage=1 00:36:24.216 --rc genhtml_function_coverage=1 00:36:24.216 --rc genhtml_legend=1 00:36:24.216 --rc geninfo_all_blocks=1 00:36:24.216 --rc geninfo_unexecuted_blocks=1 00:36:24.216 00:36:24.216 ' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:24.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.216 --rc genhtml_branch_coverage=1 00:36:24.216 --rc genhtml_function_coverage=1 00:36:24.216 --rc genhtml_legend=1 00:36:24.216 --rc geninfo_all_blocks=1 00:36:24.216 --rc geninfo_unexecuted_blocks=1 00:36:24.216 00:36:24.216 ' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:36:24.216 05:47:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.216 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:24.216 05:47:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.216 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:24.217 05:47:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:24.217 Cannot find device "nvmf_init_br" 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:24.217 Cannot find device "nvmf_init_br2" 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:24.217 Cannot find device "nvmf_tgt_br" 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:36:24.217 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:24.476 Cannot find device "nvmf_tgt_br2" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:24.476 Cannot find device "nvmf_init_br" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:24.476 Cannot find device "nvmf_init_br2" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:24.476 Cannot find device "nvmf_tgt_br" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:24.476 Cannot find device "nvmf_tgt_br2" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:24.476 Cannot find device "nvmf_br" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:24.476 Cannot find device "nvmf_init_if" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:24.476 Cannot find device "nvmf_init_if2" 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:24.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:24.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:36:24.476 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:24.477 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:24.736 05:47:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:24.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:24.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:36:24.736 00:36:24.736 --- 10.0.0.3 ping statistics --- 00:36:24.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.736 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:24.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:24.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:36:24.736 00:36:24.736 --- 10.0.0.4 ping statistics --- 00:36:24.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.736 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:24.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:24.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:36:24.736 00:36:24.736 --- 10.0.0.1 ping statistics --- 00:36:24.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.736 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:24.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:24.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:36:24.736 00:36:24.736 --- 10.0.0.2 ping statistics --- 00:36:24.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.736 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86557 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86557 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 86557 ']' 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:24.736 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:24.737 [2024-11-20 05:47:44.636486] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
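With the pings above succeeding in both directions, the virtual test network is complete: the initiator-side veth ends 10.0.0.1/10.0.0.2 sit in the root namespace, the target-side ends 10.0.0.3/10.0.0.4 sit inside nvmf_tgt_ns_spdk, all four joined through the nvmf_br bridge, and the ACCEPT rules carry an 'SPDK_NVMF' comment precisely so the teardown seen earlier can strip them with grep -v SPDK_NVMF. A reduced sketch of one initiator/target pair, using the interface names from the trace (the rule comment text is illustrative; only the SPDK_NVMF prefix matters):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # only the _if end moves

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow tcp/4420'
ping -c 1 10.0.0.3                                   # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns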
00:36:24.737 [2024-11-20 05:47:44.636579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:25.032 [2024-11-20 05:47:44.784562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:25.032 [2024-11-20 05:47:44.830648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:25.032 [2024-11-20 05:47:44.830691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:25.032 [2024-11-20 05:47:44.830701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:25.032 [2024-11-20 05:47:44.830710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:25.032 [2024-11-20 05:47:44.830717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:25.032 [2024-11-20 05:47:44.831623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:25.032 [2024-11-20 05:47:44.831793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:25.033 [2024-11-20 05:47:44.831795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.293 [2024-11-20 05:47:44.989248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.293 05:47:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.293 Malloc0 00:36:25.293 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.293 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:25.293 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.293 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:36:25.293 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.293 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 [2024-11-20 05:47:45.048850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 [2024-11-20 05:47:45.056791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 Malloc1 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86600 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86600 /var/tmp/bdevperf.sock 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 86600 ']' 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:25.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:25.294 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.553 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:25.553 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:36:25.553 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:36:25.553 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.553 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.813 NVMe0n1 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.813 1 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.813 2024/11/20 05:47:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:36:25.813 request: 00:36:25.813 { 00:36:25.813 "method": "bdev_nvme_attach_controller", 00:36:25.813 "params": { 00:36:25.813 "name": "NVMe0", 00:36:25.813 "trtype": "tcp", 00:36:25.813 "traddr": "10.0.0.3", 00:36:25.813 "adrfam": "ipv4", 00:36:25.813 "trsvcid": "4420", 00:36:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:25.813 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:36:25.813 "hostaddr": "10.0.0.1", 00:36:25.813 "prchk_reftag": false, 00:36:25.813 "prchk_guard": false, 00:36:25.813 "hdgst": false, 00:36:25.813 "ddgst": false, 00:36:25.813 "allow_unrecognized_csi": false 00:36:25.813 } 00:36:25.813 } 00:36:25.813 Got JSON-RPC error response 00:36:25.813 GoRPCClient: error on JSON-RPC call 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.813 2024/11/20 05:47:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:36:25.813 request: 00:36:25.813 { 00:36:25.813 "method": "bdev_nvme_attach_controller", 00:36:25.813 "params": { 00:36:25.813 "name": "NVMe0", 00:36:25.813 "trtype": "tcp", 00:36:25.813 "traddr": "10.0.0.3", 00:36:25.813 "adrfam": "ipv4", 00:36:25.813 "trsvcid": "4420", 00:36:25.813 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:25.813 "hostaddr": "10.0.0.1", 00:36:25.813 "prchk_reftag": false, 00:36:25.813 "prchk_guard": false, 00:36:25.813 "hdgst": false, 00:36:25.813 "ddgst": false, 00:36:25.813 "allow_unrecognized_csi": false 00:36:25.813 } 00:36:25.813 } 00:36:25.813 Got JSON-RPC error response 00:36:25.813 GoRPCClient: error on JSON-RPC call 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:36:25.813 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.814 2024/11/20 05:47:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:36:25.814 request: 00:36:25.814 { 00:36:25.814 
"method": "bdev_nvme_attach_controller", 00:36:25.814 "params": { 00:36:25.814 "name": "NVMe0", 00:36:25.814 "trtype": "tcp", 00:36:25.814 "traddr": "10.0.0.3", 00:36:25.814 "adrfam": "ipv4", 00:36:25.814 "trsvcid": "4420", 00:36:25.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:25.814 "hostaddr": "10.0.0.1", 00:36:25.814 "prchk_reftag": false, 00:36:25.814 "prchk_guard": false, 00:36:25.814 "hdgst": false, 00:36:25.814 "ddgst": false, 00:36:25.814 "multipath": "disable", 00:36:25.814 "allow_unrecognized_csi": false 00:36:25.814 } 00:36:25.814 } 00:36:25.814 Got JSON-RPC error response 00:36:25.814 GoRPCClient: error on JSON-RPC call 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.814 2024/11/20 05:47:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:36:25.814 request: 00:36:25.814 { 00:36:25.814 "method": "bdev_nvme_attach_controller", 00:36:25.814 "params": { 00:36:25.814 "name": "NVMe0", 00:36:25.814 "trtype": "tcp", 00:36:25.814 "traddr": 
"10.0.0.3", 00:36:25.814 "adrfam": "ipv4", 00:36:25.814 "trsvcid": "4420", 00:36:25.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:25.814 "hostaddr": "10.0.0.1", 00:36:25.814 "prchk_reftag": false, 00:36:25.814 "prchk_guard": false, 00:36:25.814 "hdgst": false, 00:36:25.814 "ddgst": false, 00:36:25.814 "multipath": "failover", 00:36:25.814 "allow_unrecognized_csi": false 00:36:25.814 } 00:36:25.814 } 00:36:25.814 Got JSON-RPC error response 00:36:25.814 GoRPCClient: error on JSON-RPC call 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.814 NVMe0n1 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.814 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:26.072 00:36:26.072 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.072 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:26.072 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:36:26.073 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.073 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:26.073 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.073 05:47:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:36:26.073 05:47:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:27.447 { 00:36:27.447 "results": [ 00:36:27.447 { 00:36:27.447 "job": "NVMe0n1", 00:36:27.447 "core_mask": "0x1", 00:36:27.447 "workload": "write", 00:36:27.447 "status": "finished", 00:36:27.447 "queue_depth": 128, 00:36:27.447 "io_size": 4096, 00:36:27.447 "runtime": 1.004846, 00:36:27.447 "iops": 21348.544951166645, 00:36:27.447 "mibps": 83.3927537154947, 00:36:27.447 "io_failed": 0, 00:36:27.447 "io_timeout": 0, 00:36:27.447 "avg_latency_us": 5987.149964039317, 00:36:27.447 "min_latency_us": 1810.0419047619048, 00:36:27.447 "max_latency_us": 10673.005714285715 00:36:27.447 } 00:36:27.447 ], 00:36:27.447 "core_count": 1 00:36:27.447 } 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.447 05:47:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 nvme1n1 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 nvme1n1 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86600 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 86600 ']' 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 86600 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86600 00:36:27.447 killing process with pid 86600 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86600' 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 86600 00:36:27.447 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 86600 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:36:27.704 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:36:27.704 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:36:27.704 [2024-11-20 05:47:45.161739] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:27.704 [2024-11-20 05:47:45.162189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86600 ] 00:36:27.704 [2024-11-20 05:47:45.297292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.704 [2024-11-20 05:47:45.344808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:27.704 [2024-11-20 05:47:45.761642] bdev.c:4746:bdev_name_add: *ERROR*: Bdev name 9f826a2f-2434-4bb5-83be-7193b175672b already exists 00:36:27.704 [2024-11-20 05:47:45.761711] bdev.c:7955:bdev_register: *ERROR*: Unable to add uuid:9f826a2f-2434-4bb5-83be-7193b175672b alias for bdev NVMe1n1 00:36:27.704 [2024-11-20 05:47:45.761739] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:36:27.704 Running I/O for 1 seconds... 
00:36:27.704 21324.00 IOPS, 83.30 MiB/s 00:36:27.705 Latency(us) 00:36:27.705 [2024-11-20T05:47:47.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.705 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:36:27.705 NVMe0n1 : 1.00 21348.54 83.39 0.00 0.00 5987.15 1810.04 10673.01 00:36:27.705 [2024-11-20T05:47:47.624Z] =================================================================================================================== 00:36:27.705 [2024-11-20T05:47:47.624Z] Total : 21348.54 83.39 0.00 0.00 5987.15 1810.04 10673.01 00:36:27.705 Received shutdown signal, test time was about 1.000000 seconds 00:36:27.705 00:36:27.705 Latency(us) 00:36:27.705 [2024-11-20T05:47:47.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.705 [2024-11-20T05:47:47.624Z] =================================================================================================================== 00:36:27.705 [2024-11-20T05:47:47.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:27.705 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:27.705 rmmod nvme_tcp 00:36:27.705 rmmod nvme_fabrics 00:36:27.705 rmmod nvme_keyring 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86557 ']' 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86557 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 86557 ']' 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 86557 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:27.705 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86557 00:36:27.962 killing process with pid 86557 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:27.962 05:47:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86557' 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 86557 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 86557 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:27.962 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:28.220 05:47:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:28.220 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:36:28.478 00:36:28.478 real 0m4.302s 00:36:28.478 user 0m11.667s 00:36:28.478 sys 0m1.338s 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 ************************************ 00:36:28.478 END TEST nvmf_multicontroller 00:36:28.478 ************************************ 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 ************************************ 00:36:28.478 START TEST nvmf_aer 00:36:28.478 ************************************ 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:36:28.478 * Looking for test storage... 00:36:28.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:28.478 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.479 --rc genhtml_branch_coverage=1 00:36:28.479 --rc genhtml_function_coverage=1 00:36:28.479 --rc genhtml_legend=1 00:36:28.479 --rc geninfo_all_blocks=1 00:36:28.479 --rc geninfo_unexecuted_blocks=1 00:36:28.479 00:36:28.479 ' 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.479 --rc genhtml_branch_coverage=1 00:36:28.479 --rc genhtml_function_coverage=1 00:36:28.479 --rc genhtml_legend=1 00:36:28.479 --rc geninfo_all_blocks=1 00:36:28.479 --rc geninfo_unexecuted_blocks=1 00:36:28.479 00:36:28.479 ' 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.479 --rc genhtml_branch_coverage=1 00:36:28.479 --rc genhtml_function_coverage=1 00:36:28.479 --rc genhtml_legend=1 00:36:28.479 --rc geninfo_all_blocks=1 00:36:28.479 --rc geninfo_unexecuted_blocks=1 00:36:28.479 00:36:28.479 ' 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:28.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.479 --rc genhtml_branch_coverage=1 00:36:28.479 --rc genhtml_function_coverage=1 00:36:28.479 --rc genhtml_legend=1 00:36:28.479 --rc geninfo_all_blocks=1 00:36:28.479 --rc geninfo_unexecuted_blocks=1 00:36:28.479 00:36:28.479 ' 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:28.479 
05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:28.479 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:36:28.737 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:28.737 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:28.737 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:28.738 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:28.738 Cannot find device "nvmf_init_br" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:28.738 Cannot find device "nvmf_init_br2" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:28.738 Cannot find device "nvmf_tgt_br" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:28.738 Cannot find device "nvmf_tgt_br2" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:28.738 Cannot find device "nvmf_init_br" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:28.738 Cannot find device "nvmf_init_br2" 00:36:28.738 05:47:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:28.738 Cannot find device "nvmf_tgt_br" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:28.738 Cannot find device "nvmf_tgt_br2" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:28.738 Cannot find device "nvmf_br" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:28.738 Cannot find device "nvmf_init_if" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:28.738 Cannot find device "nvmf_init_if2" 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:28.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:28.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:28.738 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:28.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:28.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:36:28.997 00:36:28.997 --- 10.0.0.3 ping statistics --- 00:36:28.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.997 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:28.997 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:28.997 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:36:28.997 00:36:28.997 --- 10.0.0.4 ping statistics --- 00:36:28.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.997 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:28.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:28.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:36:28.997 00:36:28.997 --- 10.0.0.1 ping statistics --- 00:36:28.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.997 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:28.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:28.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:36:28.997 00:36:28.997 --- 10.0.0.2 ping statistics --- 00:36:28.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.997 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=86869 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 86869 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 86869 ']' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:28.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:28.997 05:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:28.997 [2024-11-20 05:47:48.904016] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:36:28.997 [2024-11-20 05:47:48.904126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.255 [2024-11-20 05:47:49.065458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:29.255 [2024-11-20 05:47:49.126428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.255 [2024-11-20 05:47:49.126513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:29.255 [2024-11-20 05:47:49.126530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.255 [2024-11-20 05:47:49.126543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.255 [2024-11-20 05:47:49.126555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.255 [2024-11-20 05:47:49.127688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.255 [2024-11-20 05:47:49.127869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.255 [2024-11-20 05:47:49.127789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:29.255 [2024-11-20 05:47:49.127870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.190 [2024-11-20 05:47:49.964520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.190 05:47:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.190 Malloc0 00:36:30.190 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.190 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:36:30.190 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.190 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.190 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.190 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:30.190 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.191 [2024-11-20 05:47:50.033536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.191 [ 00:36:30.191 { 00:36:30.191 "allow_any_host": true, 00:36:30.191 "hosts": [], 00:36:30.191 "listen_addresses": [], 00:36:30.191 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:30.191 "subtype": "Discovery" 00:36:30.191 }, 00:36:30.191 { 00:36:30.191 "allow_any_host": true, 00:36:30.191 "hosts": [], 00:36:30.191 "listen_addresses": [ 00:36:30.191 { 00:36:30.191 "adrfam": "IPv4", 00:36:30.191 "traddr": "10.0.0.3", 00:36:30.191 "trsvcid": "4420", 00:36:30.191 "trtype": "TCP" 00:36:30.191 } 00:36:30.191 ], 00:36:30.191 "max_cntlid": 65519, 00:36:30.191 "max_namespaces": 2, 00:36:30.191 "min_cntlid": 1, 00:36:30.191 "model_number": "SPDK bdev Controller", 00:36:30.191 "namespaces": [ 00:36:30.191 { 00:36:30.191 "bdev_name": "Malloc0", 00:36:30.191 "name": "Malloc0", 00:36:30.191 "nguid": "8FBDCA1E8D0543418F35AB93063910EA", 00:36:30.191 "nsid": 1, 00:36:30.191 "uuid": "8fbdca1e-8d05-4341-8f35-ab93063910ea" 00:36:30.191 } 00:36:30.191 ], 00:36:30.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.191 "serial_number": "SPDK00000000000001", 00:36:30.191 "subtype": "NVMe" 00:36:30.191 } 00:36:30.191 ] 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86923 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:36:30.191 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:36:30.449 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:30.449 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:36:30.449 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:36:30.449 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.450 Malloc1 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.450 Asynchronous Event Request test 00:36:30.450 Attaching to 10.0.0.3 00:36:30.450 Attached to 10.0.0.3 00:36:30.450 Registering asynchronous event callbacks... 00:36:30.450 Starting namespace attribute notice tests for all controllers... 00:36:30.450 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:36:30.450 aer_cb - Changed Namespace 00:36:30.450 Cleaning up... 
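The aer tool is started against 10.0.0.3 with -t /tmp/aer_touch_file, and the script blocks in waitforfile until the tool touches that file, i.e. until its AER callbacks are armed; the traced loop above polls for the file up to 200 times at 0.1 s intervals before the test proceeds to add Malloc1 as nsid 2 and trigger the namespace-attribute notice. A minimal sketch of that handshake, reconstructed from the traced loop (the real helper in autotest_common.sh may differ in details):

    # poll for the touch file the aer tool creates once its AER
    # callbacks are registered (bounds as traced: 200 tries x 0.1 s)
    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            if [ $i -ge 200 ]; then
                return 1    # give up after ~20 s
            fi
            i=$((i + 1))
            sleep 0.1
        done
        return 0
    }
    waitforfile /tmp/aer_touch_file

After the AER fires for the newly added namespace, the second nvmf_get_subsystems dump that follows shows Malloc1 present as nsid 2 alongside Malloc0.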
00:36:30.450 [ 00:36:30.450 { 00:36:30.450 "allow_any_host": true, 00:36:30.450 "hosts": [], 00:36:30.450 "listen_addresses": [], 00:36:30.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:30.450 "subtype": "Discovery" 00:36:30.450 }, 00:36:30.450 { 00:36:30.450 "allow_any_host": true, 00:36:30.450 "hosts": [], 00:36:30.450 "listen_addresses": [ 00:36:30.450 { 00:36:30.450 "adrfam": "IPv4", 00:36:30.450 "traddr": "10.0.0.3", 00:36:30.450 "trsvcid": "4420", 00:36:30.450 "trtype": "TCP" 00:36:30.450 } 00:36:30.450 ], 00:36:30.450 "max_cntlid": 65519, 00:36:30.450 "max_namespaces": 2, 00:36:30.450 "min_cntlid": 1, 00:36:30.450 "model_number": "SPDK bdev Controller", 00:36:30.450 "namespaces": [ 00:36:30.450 { 00:36:30.450 "bdev_name": "Malloc0", 00:36:30.450 "name": "Malloc0", 00:36:30.450 "nguid": "8FBDCA1E8D0543418F35AB93063910EA", 00:36:30.450 "nsid": 1, 00:36:30.450 "uuid": "8fbdca1e-8d05-4341-8f35-ab93063910ea" 00:36:30.450 }, 00:36:30.450 { 00:36:30.450 "bdev_name": "Malloc1", 00:36:30.450 "name": "Malloc1", 00:36:30.450 "nguid": "5305217263CE4B5D8EC29BDA88C83132", 00:36:30.450 "nsid": 2, 00:36:30.450 "uuid": "53052172-63ce-4b5d-8ec2-9bda88c83132" 00:36:30.450 } 00:36:30.450 ], 00:36:30.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.450 "serial_number": "SPDK00000000000001", 00:36:30.450 "subtype": "NVMe" 00:36:30.450 } 00:36:30.450 ] 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86923 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.450 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:30.708 rmmod nvme_tcp 
00:36:30.708 rmmod nvme_fabrics 00:36:30.708 rmmod nvme_keyring 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 86869 ']' 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 86869 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 86869 ']' 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 86869 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86869 00:36:30.708 killing process with pid 86869 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86869' 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 86869 00:36:30.708 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 86869 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:30.966 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:31.224 05:47:50 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:31.224 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:31.225 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:31.225 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:31.225 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:31.225 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.225 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.225 05:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:36:31.225 00:36:31.225 real 0m2.846s 00:36:31.225 user 0m6.999s 00:36:31.225 sys 0m0.879s 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:36:31.225 ************************************ 00:36:31.225 END TEST nvmf_aer 00:36:31.225 ************************************ 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.225 ************************************ 00:36:31.225 START TEST nvmf_async_init 00:36:31.225 ************************************ 00:36:31.225 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:36:31.483 * Looking for test storage... 
00:36:31.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:31.483 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:31.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.483 --rc genhtml_branch_coverage=1 00:36:31.483 --rc genhtml_function_coverage=1 00:36:31.483 --rc genhtml_legend=1 00:36:31.483 --rc geninfo_all_blocks=1 00:36:31.484 --rc geninfo_unexecuted_blocks=1 00:36:31.484 00:36:31.484 ' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:31.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.484 --rc genhtml_branch_coverage=1 00:36:31.484 --rc genhtml_function_coverage=1 00:36:31.484 --rc genhtml_legend=1 00:36:31.484 --rc geninfo_all_blocks=1 00:36:31.484 --rc geninfo_unexecuted_blocks=1 00:36:31.484 00:36:31.484 ' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:31.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.484 --rc genhtml_branch_coverage=1 00:36:31.484 --rc genhtml_function_coverage=1 00:36:31.484 --rc genhtml_legend=1 00:36:31.484 --rc geninfo_all_blocks=1 00:36:31.484 --rc geninfo_unexecuted_blocks=1 00:36:31.484 00:36:31.484 ' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:31.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.484 --rc genhtml_branch_coverage=1 00:36:31.484 --rc genhtml_function_coverage=1 00:36:31.484 --rc genhtml_legend=1 00:36:31.484 --rc geninfo_all_blocks=1 00:36:31.484 --rc geninfo_unexecuted_blocks=1 00:36:31.484 00:36:31.484 ' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:31.484 05:47:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:31.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:36:31.484 05:47:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=727dfc124e3545f69a68baf7c8102810 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:31.484 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
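Everything target-side runs inside the namespace: the harness stores the "ip netns exec" prefix in NVMF_TARGET_NS_CMD (set at common.sh@156 above) and later prepends it to NVMF_APP at common.sh@227, which is how nvmf_tgt ends up listening on the namespaced 10.0.0.3. A sketch of that launch pattern with the values logged in this run (core mask and binary path as they appear at nvmfappstart further down):

    # prefix the target app with the namespace wrapper before launching
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # as in common.sh@227
    "${NVMF_APP[@]}" &
    nvmfpid=$!    # waited on via waitforlisten, as traced below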
00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:31.485 Cannot find device "nvmf_init_br" 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:31.485 Cannot find device "nvmf_init_br2" 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:31.485 Cannot find device "nvmf_tgt_br" 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:31.485 Cannot find device "nvmf_tgt_br2" 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:36:31.485 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:31.743 Cannot find device "nvmf_init_br" 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:31.743 Cannot find device "nvmf_init_br2" 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:31.743 Cannot find device "nvmf_tgt_br" 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:31.743 Cannot find device "nvmf_tgt_br2" 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:31.743 Cannot find device "nvmf_br" 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:31.743 Cannot find device "nvmf_init_if" 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:31.743 Cannot find device "nvmf_init_if2" 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:31.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:36:31.743 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:31.743 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:32.001 05:47:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:32.001 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:32.001 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:36:32.001 00:36:32.001 --- 10.0.0.3 ping statistics --- 00:36:32.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.001 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:32.001 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:32.001 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:36:32.001 00:36:32.001 --- 10.0.0.4 ping statistics --- 00:36:32.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.001 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:32.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:32.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:36:32.001 00:36:32.001 --- 10.0.0.1 ping statistics --- 00:36:32.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.001 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:32.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:32.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:36:32.001 00:36:32.001 --- 10.0.0.2 ping statistics --- 00:36:32.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:32.001 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=87150 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 87150 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 87150 ']' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:32.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:32.001 05:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.001 [2024-11-20 05:47:51.912206] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:32.001 [2024-11-20 05:47:51.912873] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:32.271 [2024-11-20 05:47:52.075740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.271 [2024-11-20 05:47:52.133095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:32.271 [2024-11-20 05:47:52.133156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:32.271 [2024-11-20 05:47:52.133172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:32.271 [2024-11-20 05:47:52.133185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:32.271 [2024-11-20 05:47:52.133196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:32.271 [2024-11-20 05:47:52.133574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.528 [2024-11-20 05:47:52.316763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.528 null0 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 727dfc124e3545f69a68baf7c8102810 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.528 [2024-11-20 05:47:52.356886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.528 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.786 nvme0n1 00:36:32.786 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.786 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:32.786 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.786 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.786 [ 00:36:32.786 { 00:36:32.786 "aliases": [ 00:36:32.786 "727dfc12-4e35-45f6-9a68-baf7c8102810" 00:36:32.786 ], 00:36:32.786 "assigned_rate_limits": { 00:36:32.786 "r_mbytes_per_sec": 0, 00:36:32.786 "rw_ios_per_sec": 0, 00:36:32.786 "rw_mbytes_per_sec": 0, 00:36:32.786 "w_mbytes_per_sec": 0 00:36:32.786 }, 00:36:32.786 "block_size": 512, 00:36:32.786 "claimed": false, 00:36:32.786 "driver_specific": { 00:36:32.786 "mp_policy": "active_passive", 00:36:32.786 "nvme": [ 00:36:32.786 { 00:36:32.786 "ctrlr_data": { 00:36:32.786 "ana_reporting": false, 00:36:32.786 "cntlid": 1, 00:36:32.786 "firmware_revision": "25.01", 00:36:32.786 "model_number": "SPDK bdev Controller", 00:36:32.786 "multi_ctrlr": true, 00:36:32.786 "oacs": { 00:36:32.786 "firmware": 0, 00:36:32.786 "format": 0, 00:36:32.786 "ns_manage": 0, 00:36:32.786 "security": 0 00:36:32.786 }, 00:36:32.786 "serial_number": "00000000000000000000", 00:36:32.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.786 "vendor_id": "0x8086" 00:36:32.786 }, 00:36:32.786 "ns_data": { 00:36:32.786 "can_share": true, 00:36:32.786 "id": 1 00:36:32.786 }, 00:36:32.786 "trid": { 00:36:32.786 "adrfam": "IPv4", 00:36:32.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.786 "traddr": "10.0.0.3", 00:36:32.786 "trsvcid": "4420", 00:36:32.786 "trtype": "TCP" 00:36:32.786 }, 00:36:32.786 "vs": { 00:36:32.786 "nvme_version": "1.3" 00:36:32.786 } 00:36:32.786 } 00:36:32.786 ] 00:36:32.786 }, 00:36:32.786 "memory_domains": [ 00:36:32.786 { 00:36:32.786 "dma_device_id": "system", 00:36:32.786 "dma_device_type": 1 00:36:32.786 } 00:36:32.786 ], 00:36:32.786 "name": "nvme0n1", 00:36:32.786 "num_blocks": 2097152, 00:36:32.786 "numa_id": -1, 00:36:32.786 "product_name": "NVMe disk", 00:36:32.786 "supported_io_types": { 00:36:32.786 "abort": true, 
00:36:32.786 "compare": true, 00:36:32.786 "compare_and_write": true, 00:36:32.786 "copy": true, 00:36:32.786 "flush": true, 00:36:32.786 "get_zone_info": false, 00:36:32.786 "nvme_admin": true, 00:36:32.786 "nvme_io": true, 00:36:32.786 "nvme_io_md": false, 00:36:32.786 "nvme_iov_md": false, 00:36:32.786 "read": true, 00:36:32.786 "reset": true, 00:36:32.786 "seek_data": false, 00:36:32.786 "seek_hole": false, 00:36:32.786 "unmap": false, 00:36:32.786 "write": true, 00:36:32.786 "write_zeroes": true, 00:36:32.786 "zcopy": false, 00:36:32.786 "zone_append": false, 00:36:32.786 "zone_management": false 00:36:32.786 }, 00:36:32.786 "uuid": "727dfc12-4e35-45f6-9a68-baf7c8102810", 00:36:32.786 "zoned": false 00:36:32.786 } 00:36:32.787 ] 00:36:32.787 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:32.787 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:36:32.787 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:32.787 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:32.787 [2024-11-20 05:47:52.625395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:32.787 [2024-11-20 05:47:52.625680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d84f0 (9): Bad file descriptor 00:36:33.045 [2024-11-20 05:47:52.757574] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.045 [ 00:36:33.045 { 00:36:33.045 "aliases": [ 00:36:33.045 "727dfc12-4e35-45f6-9a68-baf7c8102810" 00:36:33.045 ], 00:36:33.045 "assigned_rate_limits": { 00:36:33.045 "r_mbytes_per_sec": 0, 00:36:33.045 "rw_ios_per_sec": 0, 00:36:33.045 "rw_mbytes_per_sec": 0, 00:36:33.045 "w_mbytes_per_sec": 0 00:36:33.045 }, 00:36:33.045 "block_size": 512, 00:36:33.045 "claimed": false, 00:36:33.045 "driver_specific": { 00:36:33.045 "mp_policy": "active_passive", 00:36:33.045 "nvme": [ 00:36:33.045 { 00:36:33.045 "ctrlr_data": { 00:36:33.045 "ana_reporting": false, 00:36:33.045 "cntlid": 2, 00:36:33.045 "firmware_revision": "25.01", 00:36:33.045 "model_number": "SPDK bdev Controller", 00:36:33.045 "multi_ctrlr": true, 00:36:33.045 "oacs": { 00:36:33.045 "firmware": 0, 00:36:33.045 "format": 0, 00:36:33.045 "ns_manage": 0, 00:36:33.045 "security": 0 00:36:33.045 }, 00:36:33.045 "serial_number": "00000000000000000000", 00:36:33.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.045 "vendor_id": "0x8086" 00:36:33.045 }, 00:36:33.045 "ns_data": { 00:36:33.045 "can_share": true, 00:36:33.045 "id": 1 00:36:33.045 }, 00:36:33.045 "trid": { 00:36:33.045 "adrfam": "IPv4", 00:36:33.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.045 "traddr": "10.0.0.3", 00:36:33.045 "trsvcid": "4420", 00:36:33.045 "trtype": "TCP" 00:36:33.045 }, 00:36:33.045 "vs": { 00:36:33.045 "nvme_version": "1.3" 00:36:33.045 } 00:36:33.045 } 00:36:33.045 ] 
00:36:33.045 }, 00:36:33.045 "memory_domains": [ 00:36:33.045 { 00:36:33.045 "dma_device_id": "system", 00:36:33.045 "dma_device_type": 1 00:36:33.045 } 00:36:33.045 ], 00:36:33.045 "name": "nvme0n1", 00:36:33.045 "num_blocks": 2097152, 00:36:33.045 "numa_id": -1, 00:36:33.045 "product_name": "NVMe disk", 00:36:33.045 "supported_io_types": { 00:36:33.045 "abort": true, 00:36:33.045 "compare": true, 00:36:33.045 "compare_and_write": true, 00:36:33.045 "copy": true, 00:36:33.045 "flush": true, 00:36:33.045 "get_zone_info": false, 00:36:33.045 "nvme_admin": true, 00:36:33.045 "nvme_io": true, 00:36:33.045 "nvme_io_md": false, 00:36:33.045 "nvme_iov_md": false, 00:36:33.045 "read": true, 00:36:33.045 "reset": true, 00:36:33.045 "seek_data": false, 00:36:33.045 "seek_hole": false, 00:36:33.045 "unmap": false, 00:36:33.045 "write": true, 00:36:33.045 "write_zeroes": true, 00:36:33.045 "zcopy": false, 00:36:33.045 "zone_append": false, 00:36:33.045 "zone_management": false 00:36:33.045 }, 00:36:33.045 "uuid": "727dfc12-4e35-45f6-9a68-baf7c8102810", 00:36:33.045 "zoned": false 00:36:33.045 } 00:36:33.045 ] 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2xycd3BhrL 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2xycd3BhrL 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.2xycd3BhrL 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.045 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.046 [2024-11-20 05:47:52.842034] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:33.046 [2024-11-20 05:47:52.842186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.046 [2024-11-20 05:47:52.858039] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:33.046 nvme0n1 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.046 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.046 [ 00:36:33.046 { 00:36:33.046 "aliases": [ 00:36:33.046 "727dfc12-4e35-45f6-9a68-baf7c8102810" 00:36:33.046 ], 00:36:33.046 "assigned_rate_limits": { 00:36:33.046 "r_mbytes_per_sec": 0, 00:36:33.046 "rw_ios_per_sec": 0, 00:36:33.046 "rw_mbytes_per_sec": 0, 00:36:33.046 "w_mbytes_per_sec": 0 00:36:33.046 }, 00:36:33.046 "block_size": 512, 00:36:33.046 "claimed": false, 00:36:33.046 "driver_specific": { 00:36:33.046 "mp_policy": "active_passive", 00:36:33.046 "nvme": [ 00:36:33.046 { 00:36:33.046 "ctrlr_data": { 00:36:33.046 "ana_reporting": false, 00:36:33.046 "cntlid": 3, 00:36:33.046 "firmware_revision": "25.01", 00:36:33.046 "model_number": "SPDK bdev Controller", 00:36:33.046 "multi_ctrlr": true, 00:36:33.046 "oacs": { 00:36:33.046 "firmware": 0, 00:36:33.046 "format": 0, 00:36:33.046 "ns_manage": 0, 00:36:33.046 "security": 0 00:36:33.046 }, 00:36:33.046 "serial_number": "00000000000000000000", 00:36:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.046 "vendor_id": "0x8086" 00:36:33.046 }, 00:36:33.046 "ns_data": { 00:36:33.046 "can_share": true, 00:36:33.046 "id": 1 00:36:33.046 }, 00:36:33.046 "trid": { 00:36:33.046 "adrfam": "IPv4", 00:36:33.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.046 "traddr": "10.0.0.3", 00:36:33.046 "trsvcid": "4421", 00:36:33.046 "trtype": "TCP" 00:36:33.046 }, 00:36:33.046 "vs": { 00:36:33.046 "nvme_version": "1.3" 00:36:33.046 } 00:36:33.046 } 00:36:33.046 ] 00:36:33.046 }, 00:36:33.046 "memory_domains": [ 00:36:33.046 { 00:36:33.046 "dma_device_id": "system", 00:36:33.046 "dma_device_type": 1 00:36:33.046 } 00:36:33.046 ], 00:36:33.046 "name": "nvme0n1", 00:36:33.046 "num_blocks": 
2097152, 00:36:33.046 "numa_id": -1, 00:36:33.046 "product_name": "NVMe disk", 00:36:33.046 "supported_io_types": { 00:36:33.046 "abort": true, 00:36:33.046 "compare": true, 00:36:33.046 "compare_and_write": true, 00:36:33.046 "copy": true, 00:36:33.046 "flush": true, 00:36:33.046 "get_zone_info": false, 00:36:33.046 "nvme_admin": true, 00:36:33.046 "nvme_io": true, 00:36:33.046 "nvme_io_md": false, 00:36:33.046 "nvme_iov_md": false, 00:36:33.046 "read": true, 00:36:33.046 "reset": true, 00:36:33.046 "seek_data": false, 00:36:33.046 "seek_hole": false, 00:36:33.046 "unmap": false, 00:36:33.046 "write": true, 00:36:33.303 "write_zeroes": true, 00:36:33.303 "zcopy": false, 00:36:33.303 "zone_append": false, 00:36:33.303 "zone_management": false 00:36:33.303 }, 00:36:33.303 "uuid": "727dfc12-4e35-45f6-9a68-baf7c8102810", 00:36:33.303 "zoned": false 00:36:33.303 } 00:36:33.303 ] 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.2xycd3BhrL 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:33.303 05:47:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:33.303 rmmod nvme_tcp 00:36:33.303 rmmod nvme_fabrics 00:36:33.303 rmmod nvme_keyring 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 87150 ']' 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 87150 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 87150 ']' 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 87150 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87150 00:36:33.303 killing process with pid 
87150 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87150' 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 87150 00:36:33.303 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 87150 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:33.561 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
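With the async_init test torn down at this point, the control-plane sequence it exercised is worth recapping. The rpc_cmd calls in the trace map onto SPDK's JSON-RPC interface; below is a sketch using scripts/rpc.py directly (the harness presumably routes rpc_cmd through the default /var/tmp/spdk.sock socket — an assumption, since only the expanded commands appear in the trace):

    # Start the target pinned to core 0 inside the test namespace (trace @508).
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    rpc=./scripts/rpc.py
    # Plain NVMe/TCP path (host/async_init.sh@26-@37).
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 727dfc124e3545f69a68baf7c8102810
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0
    # TLS variant (host/async_init.sh@53-@66): register a PSK file, lock the
    # subsystem down, and require the key on a second, secure-channel listener.
    key=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"
    $rpc keyring_file_add_key key0 "$key"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

The iptr cleanup just above works because of those comment tags: iptables-save | grep -v SPDK_NVMF | iptables-restore drops exactly the rules the test inserted and nothing else.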
00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:36:33.819 00:36:33.819 real 0m2.472s 00:36:33.819 user 0m1.737s 00:36:33.819 sys 0m0.875s 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:33.819 ************************************ 00:36:33.819 END TEST nvmf_async_init 00:36:33.819 ************************************ 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.819 ************************************ 00:36:33.819 START TEST dma 00:36:33.819 ************************************ 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:36:33.819 * Looking for test storage... 00:36:33.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:33.819 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.078 --rc genhtml_branch_coverage=1 00:36:34.078 --rc genhtml_function_coverage=1 00:36:34.078 --rc genhtml_legend=1 00:36:34.078 --rc geninfo_all_blocks=1 00:36:34.078 --rc geninfo_unexecuted_blocks=1 00:36:34.078 00:36:34.078 ' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.078 --rc genhtml_branch_coverage=1 00:36:34.078 --rc genhtml_function_coverage=1 00:36:34.078 --rc genhtml_legend=1 00:36:34.078 --rc geninfo_all_blocks=1 00:36:34.078 --rc geninfo_unexecuted_blocks=1 00:36:34.078 00:36:34.078 ' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.078 --rc genhtml_branch_coverage=1 00:36:34.078 --rc genhtml_function_coverage=1 00:36:34.078 --rc genhtml_legend=1 00:36:34.078 --rc geninfo_all_blocks=1 00:36:34.078 --rc geninfo_unexecuted_blocks=1 00:36:34.078 00:36:34.078 ' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:34.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.078 --rc genhtml_branch_coverage=1 00:36:34.078 --rc genhtml_function_coverage=1 00:36:34.078 --rc genhtml_legend=1 00:36:34.078 --rc geninfo_all_blocks=1 00:36:34.078 --rc geninfo_unexecuted_blocks=1 00:36:34.078 00:36:34.078 ' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.078 05:47:53 
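The lt/cmp_versions dance traced above decides which lcov flag spelling the coverage hooks may use. A simplified, self-contained version of the same idea follows; it is not the verbatim scripts/common.sh source — splitting components on '.'/'-' and the numeric left-to-right comparison are taken from the trace, the rest is assumed:

    # Return success when $1 is a strictly older version than $2.
    version_lt() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier component decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov is pre-2.x: use the --rc lcov_* option spelling"

This matches the branch the trace takes: lcov 1.15 sorts before 2, so the older --rc lcov_branch_coverage=1 style options are exported.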
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:34.078 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:36:34.078 00:36:34.078 real 0m0.238s 00:36:34.078 user 0m0.139s 00:36:34.078 sys 0m0.106s 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:36:34.078 ************************************ 00:36:34.078 END TEST dma 00:36:34.078 ************************************ 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:34.078 05:47:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.078 ************************************ 00:36:34.078 START TEST nvmf_identify 00:36:34.078 ************************************ 00:36:34.079 05:47:53 
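The dma suite is effectively a no-op on this run: host/dma.sh@12-@13 compares the transport against rdma and bails out successfully otherwise, so under --transport=tcp the whole test is the guard plus exit. In sketch form (the trace only shows the already-expanded comparison, so the variable name here is assumed):

    # host/dma.sh guard, reconstructed from the trace: the DMA offload test
    # is RDMA-only, so a tcp run exits 0 immediately.
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
      exit 0
    fi

That is why the timing summary above reports only a fraction of a second for TEST dma.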
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:36:34.338 * Looking for test storage... 00:36:34.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.338 --rc genhtml_branch_coverage=1 00:36:34.338 --rc genhtml_function_coverage=1 00:36:34.338 --rc genhtml_legend=1 00:36:34.338 --rc geninfo_all_blocks=1 00:36:34.338 --rc geninfo_unexecuted_blocks=1 00:36:34.338 00:36:34.338 ' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.338 --rc genhtml_branch_coverage=1 00:36:34.338 --rc genhtml_function_coverage=1 00:36:34.338 --rc genhtml_legend=1 00:36:34.338 --rc geninfo_all_blocks=1 00:36:34.338 --rc geninfo_unexecuted_blocks=1 00:36:34.338 00:36:34.338 ' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.338 --rc genhtml_branch_coverage=1 00:36:34.338 --rc genhtml_function_coverage=1 00:36:34.338 --rc genhtml_legend=1 00:36:34.338 --rc geninfo_all_blocks=1 00:36:34.338 --rc geninfo_unexecuted_blocks=1 00:36:34.338 00:36:34.338 ' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:34.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.338 --rc genhtml_branch_coverage=1 00:36:34.338 --rc genhtml_function_coverage=1 00:36:34.338 --rc genhtml_legend=1 00:36:34.338 --rc geninfo_all_blocks=1 00:36:34.338 --rc geninfo_unexecuted_blocks=1 00:36:34.338 00:36:34.338 ' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.338 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.339 
05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:34.339 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.339 05:47:54 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:34.339 Cannot find device "nvmf_init_br" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:34.339 Cannot find device "nvmf_init_br2" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:34.339 Cannot find device "nvmf_tgt_br" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:36:34.339 Cannot find device "nvmf_tgt_br2" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:34.339 Cannot find device "nvmf_init_br" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:34.339 Cannot find device "nvmf_init_br2" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:34.339 Cannot find device "nvmf_tgt_br" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:34.339 Cannot find device "nvmf_tgt_br2" 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:36:34.339 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:34.598 Cannot find device "nvmf_br" 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:34.598 Cannot find device "nvmf_init_if" 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:34.598 Cannot find device "nvmf_init_if2" 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:34.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:34.598 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:34.598 
05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:34.598 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:34.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:36:34.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:36:34.857 00:36:34.857 --- 10.0.0.3 ping statistics --- 00:36:34.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.857 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:34.857 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:34.857 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:36:34.857 00:36:34.857 --- 10.0.0.4 ping statistics --- 00:36:34.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.857 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:34.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:34.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:36:34.857 00:36:34.857 --- 10.0.0.1 ping statistics --- 00:36:34.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.857 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:34.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:34.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:36:34.857 00:36:34.857 --- 10.0.0.2 ping statistics --- 00:36:34.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.857 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87466 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87466 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 87466 ']' 00:36:34.857 
05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:34.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:34.857 05:47:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:34.857 [2024-11-20 05:47:54.686921] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:34.857 [2024-11-20 05:47:54.687014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:35.116 [2024-11-20 05:47:54.847843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:35.116 [2024-11-20 05:47:54.910781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:35.116 [2024-11-20 05:47:54.910846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:35.116 [2024-11-20 05:47:54.910861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:35.116 [2024-11-20 05:47:54.910874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:35.116 [2024-11-20 05:47:54.910885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
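[Annotation] The nvmf_veth_init sequence logged above reduces to the sketch below: a condensed replay of the exact commands shown, with the repetitive `ip link set ... up` steps, the second-interface iptables rule, and the `-m comment --comment 'SPDK_NVMF:...'` tag that the harness's `ipts` wrapper appends all trimmed for brevity. Interface, namespace, and address names are exactly as logged; nothing here is new.

    # Two initiator veth pairs stay on the host; two target veth pairs move into
    # a namespace; all four peer ends are enslaved to one bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    # Admit NVMe/TCP (port 4420) from the initiator side; allow bridge forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings earlier in the block (host to 10.0.0.3/10.0.0.4, namespace to 10.0.0.1/10.0.0.2) are the smoke test that this wiring is symmetric before nvmf_tgt is started inside the namespace.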
00:36:35.116 [2024-11-20 05:47:54.911971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.116 [2024-11-20 05:47:54.912079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:35.116 [2024-11-20 05:47:54.912139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:35.116 [2024-11-20 05:47:54.912143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 [2024-11-20 05:47:55.660522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 Malloc0 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 [2024-11-20 05:47:55.774753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.053 [ 00:36:36.053 { 00:36:36.053 "allow_any_host": true, 00:36:36.053 "hosts": [], 00:36:36.053 "listen_addresses": [ 00:36:36.053 { 00:36:36.053 "adrfam": "IPv4", 00:36:36.053 "traddr": "10.0.0.3", 00:36:36.053 "trsvcid": "4420", 00:36:36.053 "trtype": "TCP" 00:36:36.053 } 00:36:36.053 ], 00:36:36.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:36.053 "subtype": "Discovery" 00:36:36.053 }, 00:36:36.053 { 00:36:36.053 "allow_any_host": true, 00:36:36.053 "hosts": [], 00:36:36.053 "listen_addresses": [ 00:36:36.053 { 00:36:36.053 "adrfam": "IPv4", 00:36:36.053 "traddr": "10.0.0.3", 00:36:36.053 "trsvcid": "4420", 00:36:36.053 "trtype": "TCP" 00:36:36.053 } 00:36:36.053 ], 00:36:36.053 "max_cntlid": 65519, 00:36:36.053 "max_namespaces": 32, 00:36:36.053 "min_cntlid": 1, 00:36:36.053 "model_number": "SPDK bdev Controller", 00:36:36.053 "namespaces": [ 00:36:36.053 { 00:36:36.053 "bdev_name": "Malloc0", 00:36:36.053 "eui64": "ABCDEF0123456789", 00:36:36.053 "name": "Malloc0", 00:36:36.053 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:36:36.053 "nsid": 1, 00:36:36.053 "uuid": "166c7c94-c028-49d1-9566-551ec47b7e52" 00:36:36.053 } 00:36:36.053 ], 00:36:36.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:36.053 "serial_number": "SPDK00000000000001", 00:36:36.053 "subtype": "NVMe" 00:36:36.053 } 00:36:36.053 ] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.053 05:47:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:36:36.053 [2024-11-20 05:47:55.833667] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
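[Annotation] Configuration is now complete and verification begins: the rpc_cmd sequence above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener twice) is what produced the nvmf_get_subsystems JSON snapshot, and the spdk_nvme_identify probe just launched reads the same state back over the wire. As a sketch, assuming the harness's rpc_cmd wrapper forwards its arguments to scripts/rpc.py against the /var/tmp/spdk.sock socket named during waitforlisten (the wrapper's definition is not part of this log), the equivalent direct calls would be:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_get_subsystems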
00:36:36.053 [2024-11-20 05:47:55.833890] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87520 ] 00:36:36.316 [2024-11-20 05:47:55.986297] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:36:36.316 [2024-11-20 05:47:55.986352] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:36:36.316 [2024-11-20 05:47:55.986357] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:36:36.316 [2024-11-20 05:47:55.986372] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:36:36.316 [2024-11-20 05:47:55.986382] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:36:36.316 [2024-11-20 05:47:55.990647] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:36:36.316 [2024-11-20 05:47:55.990712] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc82d90 0 00:36:36.316 [2024-11-20 05:47:55.998467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:36:36.316 [2024-11-20 05:47:55.998488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:36:36.316 [2024-11-20 05:47:55.998493] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:36:36.316 [2024-11-20 05:47:55.998497] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:36:36.316 [2024-11-20 05:47:55.998524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:55.998530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:55.998534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.316 [2024-11-20 05:47:55.998547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:36.316 [2024-11-20 05:47:55.998571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.316 [2024-11-20 05:47:56.006460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.316 [2024-11-20 05:47:56.006476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.316 [2024-11-20 05:47:56.006481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.316 [2024-11-20 05:47:56.006497] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:36.316 [2024-11-20 05:47:56.006504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:36:36.316 [2024-11-20 05:47:56.006510] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:36:36.316 [2024-11-20 05:47:56.006523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:36:36.316 [2024-11-20 05:47:56.006531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.316 [2024-11-20 05:47:56.006539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.316 [2024-11-20 05:47:56.006561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.316 [2024-11-20 05:47:56.006619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.316 [2024-11-20 05:47:56.006625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.316 [2024-11-20 05:47:56.006629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.316 [2024-11-20 05:47:56.006638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:36:36.316 [2024-11-20 05:47:56.006645] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:36:36.316 [2024-11-20 05:47:56.006652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.316 [2024-11-20 05:47:56.006667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.316 [2024-11-20 05:47:56.006682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.316 [2024-11-20 05:47:56.006722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.316 [2024-11-20 05:47:56.006728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.316 [2024-11-20 05:47:56.006731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.316 [2024-11-20 05:47:56.006740] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:36:36.316 [2024-11-20 05:47:56.006748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:36:36.316 [2024-11-20 05:47:56.006755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.316 [2024-11-20 05:47:56.006762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.316 [2024-11-20 05:47:56.006769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.006782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.317 [2024-11-20 05:47:56.006827] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.006833] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.006836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.006840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.006845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:36.317 [2024-11-20 05:47:56.006854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.006858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.006862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.006868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.006881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.317 [2024-11-20 05:47:56.006924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.006930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.006934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.006938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.006942] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:36:36.317 [2024-11-20 05:47:56.006948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:36:36.317 [2024-11-20 05:47:56.006955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:36.317 [2024-11-20 05:47:56.007065] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:36:36.317 [2024-11-20 05:47:56.007070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:36.317 [2024-11-20 05:47:56.007079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.007107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.317 [2024-11-20 05:47:56.007163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.007168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.007172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:36:36.317 [2024-11-20 05:47:56.007176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.007181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:36.317 [2024-11-20 05:47:56.007189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.007217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.317 [2024-11-20 05:47:56.007257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.007263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.007267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.007275] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:36.317 [2024-11-20 05:47:56.007281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:36:36.317 [2024-11-20 05:47:56.007288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:36:36.317 [2024-11-20 05:47:56.007303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:36:36.317 [2024-11-20 05:47:56.007311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.007335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.317 [2024-11-20 05:47:56.007410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.317 [2024-11-20 05:47:56.007416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.317 [2024-11-20 05:47:56.007420] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007424] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc82d90): datao=0, datal=4096, cccid=0 00:36:36.317 [2024-11-20 05:47:56.007429] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc3600) on tqpair(0xc82d90): expected_datao=0, payload_size=4096 00:36:36.317 [2024-11-20 05:47:56.007434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:36:36.317 [2024-11-20 05:47:56.007442] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007456] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.007470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.007474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.007486] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:36:36.317 [2024-11-20 05:47:56.007491] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:36:36.317 [2024-11-20 05:47:56.007496] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:36:36.317 [2024-11-20 05:47:56.007501] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:36:36.317 [2024-11-20 05:47:56.007507] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:36:36.317 [2024-11-20 05:47:56.007512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:36:36.317 [2024-11-20 05:47:56.007524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:36:36.317 [2024-11-20 05:47:56.007531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:36.317 [2024-11-20 05:47:56.007561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.317 [2024-11-20 05:47:56.007613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.007618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.007622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007626] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.007633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.317 [2024-11-20 05:47:56.007653] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.317 [2024-11-20 05:47:56.007673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.317 [2024-11-20 05:47:56.007692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.317 [2024-11-20 05:47:56.007710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:36:36.317 [2024-11-20 05:47:56.007721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:36.317 [2024-11-20 05:47:56.007728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007738] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.007753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3600, cid 0, qid 0 00:36:36.317 [2024-11-20 05:47:56.007759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3780, cid 1, qid 0 00:36:36.317 [2024-11-20 05:47:56.007763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3900, cid 2, qid 0 00:36:36.317 [2024-11-20 05:47:56.007768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3a80, cid 3, qid 0 00:36:36.317 [2024-11-20 05:47:56.007773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3c00, cid 4, qid 0 00:36:36.317 [2024-11-20 05:47:56.007843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.007849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.007853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3c00) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.007862] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:36:36.317 [2024-11-20 05:47:56.007868] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:36:36.317 [2024-11-20 05:47:56.007877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.007887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.007900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3c00, cid 4, qid 0 00:36:36.317 [2024-11-20 05:47:56.007948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.317 [2024-11-20 05:47:56.007954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.317 [2024-11-20 05:47:56.007958] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007962] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc82d90): datao=0, datal=4096, cccid=4 00:36:36.317 [2024-11-20 05:47:56.007966] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc3c00) on tqpair(0xc82d90): expected_datao=0, payload_size=4096 00:36:36.317 [2024-11-20 05:47:56.007971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007981] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.007989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.007994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.007998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3c00) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.008013] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:36:36.317 [2024-11-20 05:47:56.008038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.008049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.008056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.008070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.317 [2024-11-20 05:47:56.008090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xcc3c00, cid 4, qid 0 00:36:36.317 [2024-11-20 05:47:56.008096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3d80, cid 5, qid 0 00:36:36.317 [2024-11-20 05:47:56.008178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.317 [2024-11-20 05:47:56.008184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.317 [2024-11-20 05:47:56.008187] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008191] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc82d90): datao=0, datal=1024, cccid=4 00:36:36.317 [2024-11-20 05:47:56.008196] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc3c00) on tqpair(0xc82d90): expected_datao=0, payload_size=1024 00:36:36.317 [2024-11-20 05:47:56.008201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008207] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008210] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.008221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.008225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.008229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3d80) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.053460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.317 [2024-11-20 05:47:56.053478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.317 [2024-11-20 05:47:56.053483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.053487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3c00) on tqpair=0xc82d90 00:36:36.317 [2024-11-20 05:47:56.053499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.317 [2024-11-20 05:47:56.053503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc82d90) 00:36:36.317 [2024-11-20 05:47:56.053511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.317 [2024-11-20 05:47:56.053536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3c00, cid 4, qid 0 00:36:36.317 [2024-11-20 05:47:56.053592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.317 [2024-11-20 05:47:56.053598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.318 [2024-11-20 05:47:56.053602] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc82d90): datao=0, datal=3072, cccid=4 00:36:36.318 [2024-11-20 05:47:56.053611] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc3c00) on tqpair(0xc82d90): expected_datao=0, payload_size=3072 00:36:36.318 [2024-11-20 05:47:56.053616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053622] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053626] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.318 [2024-11-20 05:47:56.053640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.318 [2024-11-20 05:47:56.053643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3c00) on tqpair=0xc82d90 00:36:36.318 [2024-11-20 05:47:56.053656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc82d90) 00:36:36.318 [2024-11-20 05:47:56.053666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.318 [2024-11-20 05:47:56.053685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3c00, cid 4, qid 0 00:36:36.318 [2024-11-20 05:47:56.053741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.318 [2024-11-20 05:47:56.053746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.318 [2024-11-20 05:47:56.053750] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053754] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc82d90): datao=0, datal=8, cccid=4 00:36:36.318 [2024-11-20 05:47:56.053759] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcc3c00) on tqpair(0xc82d90): expected_datao=0, payload_size=8 00:36:36.318 [2024-11-20 05:47:56.053763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053769] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.053773] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.318 ===================================================== 00:36:36.318 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:36.318 ===================================================== 00:36:36.318 Controller Capabilities/Features 00:36:36.318 ================================ 00:36:36.318 Vendor ID: 0000 00:36:36.318 Subsystem Vendor ID: 0000 00:36:36.318 Serial Number: .................... 00:36:36.318 Model Number: ........................................ 
00:36:36.318 Firmware Version: 25.01 00:36:36.318 Recommended Arb Burst: 0 00:36:36.318 IEEE OUI Identifier: 00 00 00 00:36:36.318 Multi-path I/O 00:36:36.318 May have multiple subsystem ports: No 00:36:36.318 May have multiple controllers: No 00:36:36.318 Associated with SR-IOV VF: No 00:36:36.318 Max Data Transfer Size: 131072 00:36:36.318 Max Number of Namespaces: 0 00:36:36.318 Max Number of I/O Queues: 1024 00:36:36.318 NVMe Specification Version (VS): 1.3 00:36:36.318 NVMe Specification Version (Identify): 1.3 00:36:36.318 Maximum Queue Entries: 128 00:36:36.318 Contiguous Queues Required: Yes 00:36:36.318 Arbitration Mechanisms Supported 00:36:36.318 Weighted Round Robin: Not Supported 00:36:36.318 Vendor Specific: Not Supported 00:36:36.318 Reset Timeout: 15000 ms 00:36:36.318 Doorbell Stride: 4 bytes 00:36:36.318 NVM Subsystem Reset: Not Supported 00:36:36.318 Command Sets Supported 00:36:36.318 NVM Command Set: Supported 00:36:36.318 Boot Partition: Not Supported 00:36:36.318 Memory Page Size Minimum: 4096 bytes 00:36:36.318 Memory Page Size Maximum: 4096 bytes 00:36:36.318 Persistent Memory Region: Not Supported 00:36:36.318 Optional Asynchronous Events Supported 00:36:36.318 Namespace Attribute Notices: Not Supported 00:36:36.318 Firmware Activation Notices: Not Supported 00:36:36.318 ANA Change Notices: Not Supported 00:36:36.318 PLE Aggregate Log Change Notices: Not Supported 00:36:36.318 LBA Status Info Alert Notices: Not Supported 00:36:36.318 EGE Aggregate Log Change Notices: Not Supported 00:36:36.318 Normal NVM Subsystem Shutdown event: Not Supported 00:36:36.318 Zone Descriptor Change Notices: Not Supported 00:36:36.318 Discovery Log Change Notices: Supported 00:36:36.318 Controller Attributes 00:36:36.318 128-bit Host Identifier: Not Supported 00:36:36.318 Non-Operational Permissive Mode: Not Supported 00:36:36.318 NVM Sets: Not Supported 00:36:36.318 Read Recovery Levels: Not Supported 00:36:36.318 Endurance Groups: Not Supported 00:36:36.318 Predictable Latency Mode: Not Supported 00:36:36.318 Traffic Based Keep ALive: Not Supported 00:36:36.318 Namespace Granularity: Not Supported 00:36:36.318 SQ Associations: Not Supported 00:36:36.318 UUID List: Not Supported 00:36:36.318 Multi-Domain Subsystem: Not Supported 00:36:36.318 Fixed Capacity Management: Not Supported 00:36:36.318 Variable Capacity Management: Not Supported 00:36:36.318 Delete Endurance Group: Not Supported 00:36:36.318 Delete NVM Set: Not Supported 00:36:36.318 Extended LBA Formats Supported: Not Supported 00:36:36.318 Flexible Data Placement Supported: Not Supported 00:36:36.318 00:36:36.318 Controller Memory Buffer Support 00:36:36.318 ================================ 00:36:36.318 Supported: No 00:36:36.318 00:36:36.318 Persistent Memory Region Support 00:36:36.318 ================================ 00:36:36.318 Supported: No 00:36:36.318 00:36:36.318 Admin Command Set Attributes 00:36:36.318 ============================ 00:36:36.318 Security Send/Receive: Not Supported 00:36:36.318 Format NVM: Not Supported 00:36:36.318 Firmware Activate/Download: Not Supported 00:36:36.318 Namespace Management: Not Supported 00:36:36.318 Device Self-Test: Not Supported 00:36:36.318 Directives: Not Supported 00:36:36.318 NVMe-MI: Not Supported 00:36:36.318 Virtualization Management: Not Supported 00:36:36.318 Doorbell Buffer Config: Not Supported 00:36:36.318 Get LBA Status Capability: Not Supported 00:36:36.318 Command & Feature Lockdown Capability: Not Supported 00:36:36.318 Abort Command Limit: 1 00:36:36.318 Async 
Event Request Limit: 4 00:36:36.318 Number of Firmware Slots: N/A 00:36:36.318 Firmware Slot 1 Read-Only: N/A 00:36:36.318 Firmware Activation Without Reset: N/A 00:36:36.318 Multiple Update Detection Support: N/A 00:36:36.318 Firmware Update Granularity: No Information Provided 00:36:36.318 Per-Namespace SMART Log: No 00:36:36.318 Asymmetric Namespace Access Log Page: Not Supported 00:36:36.318 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:36.318 Command Effects Log Page: Not Supported 00:36:36.318 Get Log Page Extended Data: Supported 00:36:36.318 Telemetry Log Pages: Not Supported 00:36:36.318 Persistent Event Log Pages: Not Supported 00:36:36.318 Supported Log Pages Log Page: May Support 00:36:36.318 Commands Supported & Effects Log Page: Not Supported 00:36:36.318 Feature Identifiers & Effects Log Page:May Support 00:36:36.318 NVMe-MI Commands & Effects Log Page: May Support 00:36:36.318 Data Area 4 for Telemetry Log: Not Supported 00:36:36.318 Error Log Page Entries Supported: 128 00:36:36.318 Keep Alive: Not Supported 00:36:36.318 00:36:36.318 NVM Command Set Attributes 00:36:36.318 ========================== 00:36:36.318 Submission Queue Entry Size 00:36:36.318 Max: 1 00:36:36.318 Min: 1 00:36:36.318 Completion Queue Entry Size 00:36:36.318 Max: 1 00:36:36.318 Min: 1 00:36:36.318 Number of Namespaces: 0 00:36:36.318 Compare Command: Not Supported 00:36:36.318 Write Uncorrectable Command: Not Supported 00:36:36.318 Dataset Management Command: Not Supported 00:36:36.318 Write Zeroes Command: Not Supported 00:36:36.318 Set Features Save Field: Not Supported 00:36:36.318 Reservations: Not Supported 00:36:36.318 Timestamp: Not Supported 00:36:36.318 Copy: Not Supported 00:36:36.318 Volatile Write Cache: Not Present 00:36:36.318 Atomic Write Unit (Normal): 1 00:36:36.318 Atomic Write Unit (PFail): 1 00:36:36.318 Atomic Compare & Write Unit: 1 00:36:36.318 Fused Compare & Write: Supported 00:36:36.318 Scatter-Gather List 00:36:36.318 SGL Command Set: Supported 00:36:36.318 SGL Keyed: Supported 00:36:36.318 SGL Bit Bucket Descriptor: Not Supported 00:36:36.318 SGL Metadata Pointer: Not Supported 00:36:36.318 Oversized SGL: Not Supported 00:36:36.318 SGL Metadata Address: Not Supported 00:36:36.318 SGL Offset: Supported 00:36:36.318 Transport SGL Data Block: Not Supported 00:36:36.318 Replay Protected Memory Block: Not Supported 00:36:36.318 00:36:36.318 Firmware Slot Information 00:36:36.318 ========================= 00:36:36.318 Active slot: 0 00:36:36.318 00:36:36.318 00:36:36.318 Error Log 00:36:36.318 ========= 00:36:36.318 00:36:36.318 Active Namespaces 00:36:36.318 ================= 00:36:36.318 Discovery Log Page 00:36:36.318 ================== 00:36:36.318 Generation Counter: 2 00:36:36.318 Number of Records: 2 00:36:36.318 Record Format: 0 00:36:36.318 00:36:36.318 Discovery Log Entry 0 00:36:36.318 ---------------------- 00:36:36.318 Transport Type: 3 (TCP) 00:36:36.318 Address Family: 1 (IPv4) 00:36:36.318 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:36.318 Entry Flags: 00:36:36.318 Duplicate Returned Information: 1 00:36:36.318 Explicit Persistent Connection Support for Discovery: 1 00:36:36.318 Transport Requirements: 00:36:36.318 Secure Channel: Not Required 00:36:36.318 Port ID: 0 (0x0000) 00:36:36.318 Controller ID: 65535 (0xffff) 00:36:36.318 Admin Max SQ Size: 128 00:36:36.318 Transport Service Identifier: 4420 00:36:36.318 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:36.318 Transport Address: 10.0.0.3 00:36:36.318 
Discovery Log Entry 1 00:36:36.318 ---------------------- 00:36:36.318 Transport Type: 3 (TCP) 00:36:36.318 Address Family: 1 (IPv4) 00:36:36.318 Subsystem Type: 2 (NVM Subsystem) 00:36:36.318 Entry Flags: 00:36:36.318 Duplicate Returned Information: 0 00:36:36.318 Explicit Persistent Connection Support for Discovery: 0 00:36:36.318 Transport Requirements: 00:36:36.318 Secure Channel: Not Required 00:36:36.318 Port ID: 0 (0x0000) 00:36:36.318 Controller ID: 65535 (0xffff) 00:36:36.318 Admin Max SQ Size: 128 00:36:36.318 Transport Service Identifier: 4420 00:36:36.318 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:36:36.318 Transport Address: 10.0.0.3 [2024-11-20 05:47:56.094499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.318 [2024-11-20 05:47:56.094521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.318 [2024-11-20 05:47:56.094526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.094530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3c00) on tqpair=0xc82d90 00:36:36.318 [2024-11-20 05:47:56.094623] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:36:36.318 [2024-11-20 05:47:56.094635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90 00:36:36.318 [2024-11-20 05:47:56.094642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.318 [2024-11-20 05:47:56.094648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3780) on tqpair=0xc82d90 00:36:36.318 [2024-11-20 05:47:56.094652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.318 [2024-11-20 05:47:56.094658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3900) on tqpair=0xc82d90 00:36:36.318 [2024-11-20 05:47:56.094662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.318 [2024-11-20 05:47:56.094668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3a80) on tqpair=0xc82d90 00:36:36.318 [2024-11-20 05:47:56.094672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.318 [2024-11-20 05:47:56.094681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.094685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.094689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc82d90) 00:36:36.318 [2024-11-20 05:47:56.094697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.318 [2024-11-20 05:47:56.094717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3a80, cid 3, qid 0 00:36:36.318 [2024-11-20 05:47:56.094757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.318 [2024-11-20 05:47:56.094763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.318 [2024-11-20 05:47:56.094767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.318 [2024-11-20 05:47:56.094771] 
[2024-11-20 05:47:56.094499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:36:36.318 [2024-11-20 05:47:56.094521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:36:36.318 [2024-11-20 05:47:56.094526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:36:36.318 [2024-11-20 05:47:56.094530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3c00) on tqpair=0xc82d90
00:36:36.318 [2024-11-20 05:47:56.094623] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:36:36.318 [2024-11-20 05:47:56.094635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3600) on tqpair=0xc82d90
00:36:36.318 [2024-11-20 05:47:56.094642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:36.318 [2024-11-20 05:47:56.094648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3780) on tqpair=0xc82d90
00:36:36.318 [2024-11-20 05:47:56.094652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:36.318 [2024-11-20 05:47:56.094658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3900) on tqpair=0xc82d90
00:36:36.318 [2024-11-20 05:47:56.094662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:36.318 [2024-11-20 05:47:56.094668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3a80) on tqpair=0xc82d90
00:36:36.318 [2024-11-20 05:47:56.094672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:36.318 [2024-11-20 05:47:56.094681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:36:36.318 [2024-11-20 05:47:56.094685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:36:36.318 [2024-11-20 05:47:56.094689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc82d90)
00:36:36.318 [2024-11-20 05:47:56.094697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.318 [2024-11-20 05:47:56.094717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3a80, cid 3, qid 0
00:36:36.318 [2024-11-20 05:47:56.094757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:36:36.318 [2024-11-20 05:47:56.094763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:36:36.318 [2024-11-20 05:47:56.094767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:36:36.318 [2024-11-20 05:47:56.094771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3a80) on tqpair=0xc82d90
00:36:36.318 [2024-11-20 05:47:56.094778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:36:36.318 [2024-11-20 05:47:56.094782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:36:36.318 [2024-11-20 05:47:56.094786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc82d90)
00:36:36.319 [2024-11-20 05:47:56.094793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.319 [2024-11-20 05:47:56.094810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3a80, cid 3, qid 0
00:36:36.319 [2024-11-20 05:47:56.094867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:36:36.319 [2024-11-20 05:47:56.094873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:36:36.319 [2024-11-20 05:47:56.094877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:36:36.319 [2024-11-20 05:47:56.094881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3a80) on tqpair=0xc82d90
00:36:36.319 [2024-11-20 05:47:56.094886] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:36:36.319 [2024-11-20 05:47:56.094892] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:36:36.319 [2024-11-20 05:47:56.094900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:36:36.319 [2024-11-20 05:47:56.094904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:36:36.319 [2024-11-20 05:47:56.094908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc82d90)
00:36:36.319 [2024-11-20 05:47:56.094914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.319 [2024-11-20 05:47:56.094928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3a80, cid 3, qid 0
00:36:36.319 [2024-11-20 05:47:56.094974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:36:36.319 [2024-11-20 05:47:56.094980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:36:36.319 [2024-11-20 05:47:56.094984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:36:36.319 [2024-11-20 05:47:56.094988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3a80) on tqpair=0xc82d90
00:36:36.319 [2024-11-20 05:47:56.094997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:36:36.319 [2024-11-20 05:47:56.095001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:36:36.319 [2024-11-20 05:47:56.095004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc82d90)
00:36:36.319 [2024-11-20 05:47:56.095011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:36.319 [2024-11-20 05:47:56.095024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcc3a80, cid 3, qid 0
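These records show how an NVMe-oF host tears down the discovery controller. There is no PCIe BAR over fabrics, so the CC and CSTS registers are reached through Fabrics Property Set/Get commands: the single FABRIC PROPERTY SET writes CC.SHN = 01b (normal shutdown notification), and each FABRIC PROPERTY GET round that follows polls CSTS.SHST until it reads 10b, which the trace reports further below as "shutdown complete in 4 milliseconds". A self-contained sketch of that handshake, with prop_get/prop_set as hypothetical stand-ins for the property commands backed by a mock register file (register offsets and bit positions are from the NVMe base specification):

    #include <stdint.h>
    #include <stdio.h>

    #define NVME_REG_CC    0x14u          /* Controller Configuration */
    #define NVME_REG_CSTS  0x1cu          /* Controller Status */
    #define CC_SHN_SHIFT   14             /* CC.SHN lives in bits 15:14 */
    #define CC_SHN_NORMAL  0x1u           /* 01b: normal shutdown notification */
    #define CSTS_SHST_MASK 0xcu           /* CSTS.SHST lives in bits 3:2 */
    #define CSTS_SHST_DONE 0x8u           /* 10b: shutdown processing complete */

    /* Mock register file standing in for the fabrics property transport. */
    static uint32_t mock_cc, mock_csts;

    static uint32_t prop_get(uint32_t ofst)           /* one PROPERTY GET */
    {
        return ofst == NVME_REG_CC ? mock_cc : mock_csts;
    }

    static void prop_set(uint32_t ofst, uint32_t val) /* one PROPERTY SET */
    {
        if (ofst == NVME_REG_CC) {
            mock_cc = val;
            /* Pretend the controller finishes shutting down immediately. */
            if ((val >> CC_SHN_SHIFT) & 0x3u) {
                mock_csts = CSTS_SHST_DONE;
            }
        }
    }

    int main(void)
    {
        uint32_t cc = prop_get(NVME_REG_CC);

        /* Request a normal shutdown, as the PROPERTY SET record does. */
        cc = (cc & ~(0x3u << CC_SHN_SHIFT)) | (CC_SHN_NORMAL << CC_SHN_SHIFT);
        prop_set(NVME_REG_CC, cc);

        /* Poll CSTS.SHST; each iteration is one PROPERTY GET round. */
        while ((prop_get(NVME_REG_CSTS) & CSTS_SHST_MASK) != CSTS_SHST_DONE) {
            /* a real host bounds this with the 10000 ms shutdown timeout */
        }
        printf("shutdown complete\n");
        return 0;
    }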
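Once the discovery controller reports shutdown complete (first records below), the test invokes spdk_nvme_identify a second time, aimed directly at nqn.2016-06.io.spdk:cnode1 with -L all, so the full controller-initialization state machine is traced: FABRIC CONNECT, reads of the VS and CAP properties, CC.EN toggling, Identify Controller, AER configuration, keep-alive and queue-count setup, and namespace identification. One value worth decoding from that trace is "MDTS max_xfer_size 131072": per the NVMe specification, MDTS is a power of two multiplied by the CAP.MPSMIN page size, so 131072 is consistent with mdts = 5 and a 4 KiB minimum page (assumed inputs; the log prints only the product). A worked check with a hypothetical helper:

    #include <assert.h>
    #include <stdint.h>

    /* mdts and mpsmin are the raw field values from Identify Controller
     * and the CAP property; both inputs below are assumptions consistent
     * with the traced max_xfer_size of 131072. */
    static uint64_t mdts_max_xfer(uint8_t mdts, uint8_t mpsmin)
    {
        uint64_t min_page_size = 1ull << (12 + mpsmin); /* CAP.MPSMIN: 2^(12+n) bytes */
        return min_page_size << mdts;                   /* MDTS: 2^mdts pages */
    }

    int main(void)
    {
        assert(mdts_max_xfer(5, 0) == 131072); /* 4096 << 5 */
        return 0;
    }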
00:36:36.320 [2024-11-20 05:47:56.099618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcc3a80) on tqpair=0xc82d90
00:36:36.320 [2024-11-20 05:47:56.099626] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds
00:36:36.320
00:36:36.320 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:36:36.320 [2024-11-20 05:47:56.140732] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:36:36.320 [2024-11-20 05:47:56.140784] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87522 ]
00:36:36.581 [2024-11-20 05:47:56.296035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:36:36.581 [2024-11-20 05:47:56.296098] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:36:36.581 [2024-11-20 05:47:56.296107] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:36:36.581 [2024-11-20 05:47:56.296124] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:36:36.581 [2024-11-20 05:47:56.296136] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:36:36.581 [2024-11-20 05:47:56.296437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:36:36.581 [2024-11-20 05:47:56.296516] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x195dd90 0
00:36:36.581 [2024-11-20 05:47:56.310463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:36:36.581 [2024-11-20 05:47:56.310485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:36:36.581 [2024-11-20 05:47:56.310491] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:36:36.581 [2024-11-20 05:47:56.310495] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:36:36.581 [2024-11-20 05:47:56.310522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:36:36.581 [2024-11-20 05:47:56.310527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:36:36.581 [2024-11-20 05:47:56.310532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90)
00:36:36.581 [2024-11-20 05:47:56.310543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:36:36.581 [2024-11-20 05:47:56.310568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0
00:36:36.581 [2024-11-20 05:47:56.318472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:36:36.581 [2024-11-20 05:47:56.318486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:36:36.581 [2024-11-20 05:47:56.318490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:36:36.581 [2024-11-20 05:47:56.318495]
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.581 [2024-11-20 05:47:56.318508] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:36:36.581 [2024-11-20 05:47:56.318516] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:36:36.581 [2024-11-20 05:47:56.318522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:36:36.581 [2024-11-20 05:47:56.318554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.581 [2024-11-20 05:47:56.318559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.581 [2024-11-20 05:47:56.318563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.581 [2024-11-20 05:47:56.318572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.581 [2024-11-20 05:47:56.318597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.581 [2024-11-20 05:47:56.318666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.581 [2024-11-20 05:47:56.318672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.581 [2024-11-20 05:47:56.318676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.581 [2024-11-20 05:47:56.318680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.581 [2024-11-20 05:47:56.318685] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:36:36.581 [2024-11-20 05:47:56.318693] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:36:36.581 [2024-11-20 05:47:56.318700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.581 [2024-11-20 05:47:56.318704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.581 [2024-11-20 05:47:56.318708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.581 [2024-11-20 05:47:56.318714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.582 [2024-11-20 05:47:56.318729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.582 [2024-11-20 05:47:56.318768] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.582 [2024-11-20 05:47:56.318774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.582 [2024-11-20 05:47:56.318778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.318782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.582 [2024-11-20 05:47:56.318788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:36:36.582 [2024-11-20 05:47:56.318796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:36:36.582 [2024-11-20 05:47:56.318802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.318806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.318810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.318816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.582 [2024-11-20 05:47:56.318830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.582 [2024-11-20 05:47:56.318872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.582 [2024-11-20 05:47:56.318878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.582 [2024-11-20 05:47:56.318882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.318886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.582 [2024-11-20 05:47:56.318891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:36:36.582 [2024-11-20 05:47:56.318900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.318904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.318908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.318914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.582 [2024-11-20 05:47:56.318928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.582 [2024-11-20 05:47:56.319048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.582 [2024-11-20 05:47:56.319055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.582 [2024-11-20 05:47:56.319059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.582 [2024-11-20 05:47:56.319068] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:36:36.582 [2024-11-20 05:47:56.319074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:36:36.582 [2024-11-20 05:47:56.319083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:36:36.582 [2024-11-20 05:47:56.319193] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:36:36.582 [2024-11-20 05:47:56.319199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:36:36.582 [2024-11-20 05:47:56.319208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.319223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.582 [2024-11-20 05:47:56.319239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.582 [2024-11-20 05:47:56.319707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.582 [2024-11-20 05:47:56.319721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.582 [2024-11-20 05:47:56.319726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.582 [2024-11-20 05:47:56.319736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:36:36.582 [2024-11-20 05:47:56.319746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.319762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.582 [2024-11-20 05:47:56.319778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.582 [2024-11-20 05:47:56.319821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.582 [2024-11-20 05:47:56.319827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.582 [2024-11-20 05:47:56.319832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.582 [2024-11-20 05:47:56.319841] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:36:36.582 [2024-11-20 05:47:56.319847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:36:36.582 [2024-11-20 05:47:56.319855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:36:36.582 [2024-11-20 05:47:56.319870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:36:36.582 [2024-11-20 05:47:56.319881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.319885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.319892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.582 [2024-11-20 05:47:56.319907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.582 [2024-11-20 05:47:56.320383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.582 [2024-11-20 
05:47:56.320395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.582 [2024-11-20 05:47:56.320400] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.320404] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=4096, cccid=0 00:36:36.582 [2024-11-20 05:47:56.320410] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199e600) on tqpair(0x195dd90): expected_datao=0, payload_size=4096 00:36:36.582 [2024-11-20 05:47:56.320416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.320424] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.320429] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.320437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.582 [2024-11-20 05:47:56.320453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.582 [2024-11-20 05:47:56.320458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.320462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.582 [2024-11-20 05:47:56.320470] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:36:36.582 [2024-11-20 05:47:56.320477] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:36:36.582 [2024-11-20 05:47:56.320482] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:36:36.582 [2024-11-20 05:47:56.320487] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:36:36.582 [2024-11-20 05:47:56.320493] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:36:36.582 [2024-11-20 05:47:56.320499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:36:36.582 [2024-11-20 05:47:56.320512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:36:36.582 [2024-11-20 05:47:56.320520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.320524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.320528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.320535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:36.582 [2024-11-20 05:47:56.320552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.582 [2024-11-20 05:47:56.320994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.582 [2024-11-20 05:47:56.321007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.582 [2024-11-20 05:47:56.321012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.321016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 
00:36:36.582 [2024-11-20 05:47:56.321024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.321028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.321033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.321039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.582 [2024-11-20 05:47:56.321047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.321051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.321055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.321061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.582 [2024-11-20 05:47:56.321068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.321072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.582 [2024-11-20 05:47:56.321076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x195dd90) 00:36:36.582 [2024-11-20 05:47:56.321082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.582 [2024-11-20 05:47:56.321089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195dd90) 00:36:36.583 [2024-11-20 05:47:56.321103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.583 [2024-11-20 05:47:56.321109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.321121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.321129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195dd90) 00:36:36.583 [2024-11-20 05:47:56.321140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.583 [2024-11-20 05:47:56.321157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e600, cid 0, qid 0 00:36:36.583 [2024-11-20 05:47:56.321163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e780, cid 1, qid 0 00:36:36.583 [2024-11-20 05:47:56.321168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199e900, cid 2, qid 0 00:36:36.583 [2024-11-20 05:47:56.321173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ea80, cid 3, qid 0 00:36:36.583 [2024-11-20 05:47:56.321178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x199ec00, cid 4, qid 0 00:36:36.583 [2024-11-20 05:47:56.321664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.583 [2024-11-20 05:47:56.321676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.583 [2024-11-20 05:47:56.321680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ec00) on tqpair=0x195dd90 00:36:36.583 [2024-11-20 05:47:56.321689] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:36:36.583 [2024-11-20 05:47:56.321695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.321703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.321713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.321720] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321728] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195dd90) 00:36:36.583 [2024-11-20 05:47:56.321734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:36.583 [2024-11-20 05:47:56.321749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ec00, cid 4, qid 0 00:36:36.583 [2024-11-20 05:47:56.321798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.583 [2024-11-20 05:47:56.321804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.583 [2024-11-20 05:47:56.321808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ec00) on tqpair=0x195dd90 00:36:36.583 [2024-11-20 05:47:56.321865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.321874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.321882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.321886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195dd90) 00:36:36.583 [2024-11-20 05:47:56.321892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.583 [2024-11-20 05:47:56.321907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ec00, cid 4, qid 0 00:36:36.583 [2024-11-20 05:47:56.322200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.583 [2024-11-20 05:47:56.322212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.583 [2024-11-20 
05:47:56.322216] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.322220] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=4096, cccid=4 00:36:36.583 [2024-11-20 05:47:56.322225] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199ec00) on tqpair(0x195dd90): expected_datao=0, payload_size=4096 00:36:36.583 [2024-11-20 05:47:56.322230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.322237] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.322241] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.322249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.583 [2024-11-20 05:47:56.322255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.583 [2024-11-20 05:47:56.322259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.322263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ec00) on tqpair=0x195dd90 00:36:36.583 [2024-11-20 05:47:56.322276] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:36:36.583 [2024-11-20 05:47:56.322288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.322297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.322304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.322308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195dd90) 00:36:36.583 [2024-11-20 05:47:56.322314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.583 [2024-11-20 05:47:56.322329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ec00, cid 4, qid 0 00:36:36.583 [2024-11-20 05:47:56.326462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.583 [2024-11-20 05:47:56.326477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.583 [2024-11-20 05:47:56.326481] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326485] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=4096, cccid=4 00:36:36.583 [2024-11-20 05:47:56.326490] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199ec00) on tqpair(0x195dd90): expected_datao=0, payload_size=4096 00:36:36.583 [2024-11-20 05:47:56.326495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326502] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326506] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.583 [2024-11-20 05:47:56.326517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.583 [2024-11-20 05:47:56.326521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:36:36.583 [2024-11-20 05:47:56.326525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ec00) on tqpair=0x195dd90 00:36:36.583 [2024-11-20 05:47:56.326540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195dd90) 00:36:36.583 [2024-11-20 05:47:56.326570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.583 [2024-11-20 05:47:56.326589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ec00, cid 4, qid 0 00:36:36.583 [2024-11-20 05:47:56.326641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.583 [2024-11-20 05:47:56.326647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.583 [2024-11-20 05:47:56.326650] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326654] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=4096, cccid=4 00:36:36.583 [2024-11-20 05:47:56.326659] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199ec00) on tqpair(0x195dd90): expected_datao=0, payload_size=4096 00:36:36.583 [2024-11-20 05:47:56.326664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326670] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326674] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.583 [2024-11-20 05:47:56.326687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.583 [2024-11-20 05:47:56.326691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ec00) on tqpair=0x195dd90 00:36:36.583 [2024-11-20 05:47:56.326702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326738] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326743] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:36:36.583 [2024-11-20 05:47:56.326748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:36:36.583 [2024-11-20 05:47:56.326754] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:36:36.583 [2024-11-20 05:47:56.326769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.583 [2024-11-20 05:47:56.326773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.326780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.326787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.326791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.326795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.326800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.584 [2024-11-20 05:47:56.326820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ec00, cid 4, qid 0 00:36:36.584 [2024-11-20 05:47:56.326825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ed80, cid 5, qid 0 00:36:36.584 [2024-11-20 05:47:56.326940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.584 [2024-11-20 05:47:56.326946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.584 [2024-11-20 05:47:56.326950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.326954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ec00) on tqpair=0x195dd90 00:36:36.584 [2024-11-20 05:47:56.326961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.584 [2024-11-20 05:47:56.326966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.584 [2024-11-20 05:47:56.326970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.326974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ed80) on tqpair=0x195dd90 00:36:36.584 [2024-11-20 05:47:56.326983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.326987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.326993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.327007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ed80, cid 5, qid 0 00:36:36.584 [2024-11-20 05:47:56.327497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.584 [2024-11-20 05:47:56.327510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:36:36.584 [2024-11-20 05:47:56.327514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.327518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ed80) on tqpair=0x195dd90 00:36:36.584 [2024-11-20 05:47:56.327528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.327532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.327538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.327553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ed80, cid 5, qid 0 00:36:36.584 [2024-11-20 05:47:56.327601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.584 [2024-11-20 05:47:56.327607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.584 [2024-11-20 05:47:56.327611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.327615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ed80) on tqpair=0x195dd90 00:36:36.584 [2024-11-20 05:47:56.327624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.327628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.327634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.327648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ed80, cid 5, qid 0 00:36:36.584 [2024-11-20 05:47:56.328032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.584 [2024-11-20 05:47:56.328044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.584 [2024-11-20 05:47:56.328048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ed80) on tqpair=0x195dd90 00:36:36.584 [2024-11-20 05:47:56.328068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.328079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.328087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.328097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.328104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.328113] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.328121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x195dd90) 00:36:36.584 [2024-11-20 05:47:56.328131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.584 [2024-11-20 05:47:56.328146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ed80, cid 5, qid 0 00:36:36.584 [2024-11-20 05:47:56.328151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ec00, cid 4, qid 0 00:36:36.584 [2024-11-20 05:47:56.328156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ef00, cid 6, qid 0 00:36:36.584 [2024-11-20 05:47:56.328161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199f080, cid 7, qid 0 00:36:36.584 [2024-11-20 05:47:56.328622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.584 [2024-11-20 05:47:56.328634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.584 [2024-11-20 05:47:56.328638] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328642] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=8192, cccid=5 00:36:36.584 [2024-11-20 05:47:56.328647] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199ed80) on tqpair(0x195dd90): expected_datao=0, payload_size=8192 00:36:36.584 [2024-11-20 05:47:56.328652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328668] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328672] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.584 [2024-11-20 05:47:56.328683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.584 [2024-11-20 05:47:56.328687] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328691] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=512, cccid=4 00:36:36.584 [2024-11-20 05:47:56.328695] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199ec00) on tqpair(0x195dd90): expected_datao=0, payload_size=512 00:36:36.584 [2024-11-20 05:47:56.328700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328706] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328710] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.584 [2024-11-20 05:47:56.328721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.584 [2024-11-20 05:47:56.328725] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=512, 
cccid=6 00:36:36.584 [2024-11-20 05:47:56.328733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199ef00) on tqpair(0x195dd90): expected_datao=0, payload_size=512 00:36:36.584 [2024-11-20 05:47:56.328738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328744] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:36:36.584 [2024-11-20 05:47:56.328758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:36:36.584 [2024-11-20 05:47:56.328762] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328766] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x195dd90): datao=0, datal=4096, cccid=7 00:36:36.584 [2024-11-20 05:47:56.328771] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x199f080) on tqpair(0x195dd90): expected_datao=0, payload_size=4096 00:36:36.584 [2024-11-20 05:47:56.328775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328786] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.584 [2024-11-20 05:47:56.328797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.584 [2024-11-20 05:47:56.328800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.584 [2024-11-20 05:47:56.328804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ed80) on tqpair=0x195dd90 00:36:36.584 [2024-11-20 05:47:56.328818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.584 [2024-11-20 05:47:56.328824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.584 [2024-11-20 05:47:56.328828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.584 ===================================================== 00:36:36.584 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:36.584 ===================================================== 00:36:36.584 Controller Capabilities/Features 00:36:36.584 ================================ 00:36:36.584 Vendor ID: 8086 00:36:36.584 Subsystem Vendor ID: 8086 00:36:36.584 Serial Number: SPDK00000000000001 00:36:36.584 Model Number: SPDK bdev Controller 00:36:36.584 Firmware Version: 25.01 00:36:36.584 Recommended Arb Burst: 6 00:36:36.584 IEEE OUI Identifier: e4 d2 5c 00:36:36.584 Multi-path I/O 00:36:36.584 May have multiple subsystem ports: Yes 00:36:36.584 May have multiple controllers: Yes 00:36:36.584 Associated with SR-IOV VF: No 00:36:36.584 Max Data Transfer Size: 131072 00:36:36.585 Max Number of Namespaces: 32 00:36:36.585 Max Number of I/O Queues: 127 00:36:36.585 NVMe Specification Version (VS): 1.3 00:36:36.585 NVMe Specification Version (Identify): 1.3 00:36:36.585 Maximum Queue Entries: 128 00:36:36.585 Contiguous Queues Required: Yes 00:36:36.585 Arbitration Mechanisms Supported 00:36:36.585 Weighted Round Robin: Not Supported 00:36:36.585 Vendor Specific: Not Supported 00:36:36.585 Reset Timeout: 15000 ms 00:36:36.585 Doorbell Stride: 4 bytes 00:36:36.585 
NVM Subsystem Reset: Not Supported 00:36:36.585 Command Sets Supported 00:36:36.585 NVM Command Set: Supported 00:36:36.585 Boot Partition: Not Supported 00:36:36.585 Memory Page Size Minimum: 4096 bytes 00:36:36.585 Memory Page Size Maximum: 4096 bytes 00:36:36.585 Persistent Memory Region: Not Supported 00:36:36.585 Optional Asynchronous Events Supported 00:36:36.585 Namespace Attribute Notices: Supported 00:36:36.585 Firmware Activation Notices: Not Supported 00:36:36.585 ANA Change Notices: Not Supported 00:36:36.585 PLE Aggregate Log Change Notices: Not Supported 00:36:36.585 LBA Status Info Alert Notices: Not Supported 00:36:36.585 EGE Aggregate Log Change Notices: Not Supported 00:36:36.585 Normal NVM Subsystem Shutdown event: Not Supported 00:36:36.585 Zone Descriptor Change Notices: Not Supported 00:36:36.585 Discovery Log Change Notices: Not Supported 00:36:36.585 Controller Attributes 00:36:36.585 128-bit Host Identifier: Supported 00:36:36.585 Non-Operational Permissive Mode: Not Supported 00:36:36.585 NVM Sets: Not Supported 00:36:36.585 Read Recovery Levels: Not Supported 00:36:36.585 Endurance Groups: Not Supported 00:36:36.585 Predictable Latency Mode: Not Supported 00:36:36.585 Traffic Based Keep ALive: Not Supported 00:36:36.585 Namespace Granularity: Not Supported 00:36:36.585 SQ Associations: Not Supported 00:36:36.585 UUID List: Not Supported 00:36:36.585 Multi-Domain Subsystem: Not Supported 00:36:36.585 Fixed Capacity Management: Not Supported 00:36:36.585 Variable Capacity Management: Not Supported 00:36:36.585 Delete Endurance Group: Not Supported 00:36:36.585 Delete NVM Set: Not Supported 00:36:36.585 Extended LBA Formats Supported: Not Supported 00:36:36.585 Flexible Data Placement Supported: Not Supported 00:36:36.585 00:36:36.585 Controller Memory Buffer Support 00:36:36.585 ================================ 00:36:36.585 Supported: No 00:36:36.585 00:36:36.585 Persistent Memory Region Support 00:36:36.585 ================================ 00:36:36.585 Supported: No 00:36:36.585 00:36:36.585 Admin Command Set Attributes 00:36:36.585 ============================ 00:36:36.585 Security Send/Receive: Not Supported 00:36:36.585 Format NVM: Not Supported 00:36:36.585 Firmware Activate/Download: Not Supported 00:36:36.585 Namespace Management: Not Supported 00:36:36.585 Device Self-Test: Not Supported 00:36:36.585 Directives: Not Supported 00:36:36.585 NVMe-MI: Not Supported 00:36:36.585 Virtualization Management: Not Supported 00:36:36.585 Doorbell Buffer Config: Not Supported 00:36:36.585 Get LBA Status Capability: Not Supported 00:36:36.585 Command & Feature Lockdown Capability: Not Supported 00:36:36.585 Abort Command Limit: 4 00:36:36.585 Async Event Request Limit: 4 00:36:36.585 Number of Firmware Slots: N/A 00:36:36.585 Firmware Slot 1 Read-Only: N/A 00:36:36.585 Firmware Activation Without Reset: [2024-11-20 05:47:56.328832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ec00) on tqpair=0x195dd90 00:36:36.585 [2024-11-20 05:47:56.328844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.585 [2024-11-20 05:47:56.328850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.585 [2024-11-20 05:47:56.328854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.585 [2024-11-20 05:47:56.328858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ef00) on tqpair=0x195dd90 00:36:36.585 [2024-11-20 05:47:56.328865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:36:36.585 [2024-11-20 05:47:56.328870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.585 [2024-11-20 05:47:56.328874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.585 [2024-11-20 05:47:56.328878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199f080) on tqpair=0x195dd90 00:36:36.585 N/A 00:36:36.585 Multiple Update Detection Support: N/A 00:36:36.585 Firmware Update Granularity: No Information Provided 00:36:36.585 Per-Namespace SMART Log: No 00:36:36.585 Asymmetric Namespace Access Log Page: Not Supported 00:36:36.585 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:36:36.585 Command Effects Log Page: Supported 00:36:36.585 Get Log Page Extended Data: Supported 00:36:36.585 Telemetry Log Pages: Not Supported 00:36:36.585 Persistent Event Log Pages: Not Supported 00:36:36.585 Supported Log Pages Log Page: May Support 00:36:36.585 Commands Supported & Effects Log Page: Not Supported 00:36:36.585 Feature Identifiers & Effects Log Page:May Support 00:36:36.585 NVMe-MI Commands & Effects Log Page: May Support 00:36:36.585 Data Area 4 for Telemetry Log: Not Supported 00:36:36.585 Error Log Page Entries Supported: 128 00:36:36.585 Keep Alive: Supported 00:36:36.585 Keep Alive Granularity: 10000 ms 00:36:36.585 00:36:36.585 NVM Command Set Attributes 00:36:36.585 ========================== 00:36:36.585 Submission Queue Entry Size 00:36:36.585 Max: 64 00:36:36.585 Min: 64 00:36:36.585 Completion Queue Entry Size 00:36:36.585 Max: 16 00:36:36.585 Min: 16 00:36:36.585 Number of Namespaces: 32 00:36:36.585 Compare Command: Supported 00:36:36.585 Write Uncorrectable Command: Not Supported 00:36:36.585 Dataset Management Command: Supported 00:36:36.585 Write Zeroes Command: Supported 00:36:36.585 Set Features Save Field: Not Supported 00:36:36.585 Reservations: Supported 00:36:36.585 Timestamp: Not Supported 00:36:36.585 Copy: Supported 00:36:36.585 Volatile Write Cache: Present 00:36:36.585 Atomic Write Unit (Normal): 1 00:36:36.585 Atomic Write Unit (PFail): 1 00:36:36.585 Atomic Compare & Write Unit: 1 00:36:36.585 Fused Compare & Write: Supported 00:36:36.585 Scatter-Gather List 00:36:36.585 SGL Command Set: Supported 00:36:36.585 SGL Keyed: Supported 00:36:36.585 SGL Bit Bucket Descriptor: Not Supported 00:36:36.585 SGL Metadata Pointer: Not Supported 00:36:36.585 Oversized SGL: Not Supported 00:36:36.585 SGL Metadata Address: Not Supported 00:36:36.585 SGL Offset: Supported 00:36:36.585 Transport SGL Data Block: Not Supported 00:36:36.585 Replay Protected Memory Block: Not Supported 00:36:36.585 00:36:36.585 Firmware Slot Information 00:36:36.585 ========================= 00:36:36.585 Active slot: 1 00:36:36.585 Slot 1 Firmware Revision: 25.01 00:36:36.585 00:36:36.585 00:36:36.585 Commands Supported and Effects 00:36:36.585 ============================== 00:36:36.585 Admin Commands 00:36:36.585 -------------- 00:36:36.585 Get Log Page (02h): Supported 00:36:36.585 Identify (06h): Supported 00:36:36.585 Abort (08h): Supported 00:36:36.585 Set Features (09h): Supported 00:36:36.585 Get Features (0Ah): Supported 00:36:36.585 Asynchronous Event Request (0Ch): Supported 00:36:36.585 Keep Alive (18h): Supported 00:36:36.585 I/O Commands 00:36:36.585 ------------ 00:36:36.585 Flush (00h): Supported LBA-Change 00:36:36.585 Write (01h): Supported LBA-Change 00:36:36.585 Read (02h): Supported 00:36:36.585 Compare (05h): Supported 00:36:36.585 Write Zeroes (08h): Supported LBA-Change 00:36:36.585 Dataset Management 
(09h): Supported LBA-Change 00:36:36.585 Copy (19h): Supported LBA-Change 00:36:36.585 00:36:36.585 Error Log 00:36:36.585 ========= 00:36:36.585 00:36:36.585 Arbitration 00:36:36.585 =========== 00:36:36.585 Arbitration Burst: 1 00:36:36.585 00:36:36.585 Power Management 00:36:36.585 ================ 00:36:36.585 Number of Power States: 1 00:36:36.585 Current Power State: Power State #0 00:36:36.585 Power State #0: 00:36:36.585 Max Power: 0.00 W 00:36:36.585 Non-Operational State: Operational 00:36:36.585 Entry Latency: Not Reported 00:36:36.585 Exit Latency: Not Reported 00:36:36.585 Relative Read Throughput: 0 00:36:36.585 Relative Read Latency: 0 00:36:36.585 Relative Write Throughput: 0 00:36:36.585 Relative Write Latency: 0 00:36:36.585 Idle Power: Not Reported 00:36:36.585 Active Power: Not Reported 00:36:36.585 Non-Operational Permissive Mode: Not Supported 00:36:36.585 00:36:36.585 Health Information 00:36:36.585 ================== 00:36:36.585 Critical Warnings: 00:36:36.585 Available Spare Space: OK 00:36:36.585 Temperature: OK 00:36:36.585 Device Reliability: OK 00:36:36.585 Read Only: No 00:36:36.585 Volatile Memory Backup: OK 00:36:36.585 Current Temperature: 0 Kelvin (-273 Celsius) 00:36:36.586 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:36:36.586 Available Spare: 0% 00:36:36.586 Available Spare Threshold: 0% 00:36:36.586 Life Percentage Used:[2024-11-20 05:47:56.328966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.328971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x195dd90) 00:36:36.586 [2024-11-20 05:47:56.328977] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.586 [2024-11-20 05:47:56.328994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199f080, cid 7, qid 0 00:36:36.586 [2024-11-20 05:47:56.329170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.586 [2024-11-20 05:47:56.329176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.586 [2024-11-20 05:47:56.329180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199f080) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.329217] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:36:36.586 [2024-11-20 05:47:56.329226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e600) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.329232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.586 [2024-11-20 05:47:56.329238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e780) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.329242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.586 [2024-11-20 05:47:56.329248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199e900) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.329252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.586 [2024-11-20 05:47:56.329257] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ea80) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.329262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.586 [2024-11-20 05:47:56.329271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195dd90) 00:36:36.586 [2024-11-20 05:47:56.329285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.586 [2024-11-20 05:47:56.329301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ea80, cid 3, qid 0 00:36:36.586 [2024-11-20 05:47:56.329366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.586 [2024-11-20 05:47:56.329372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.586 [2024-11-20 05:47:56.329376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ea80) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.329387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195dd90) 00:36:36.586 [2024-11-20 05:47:56.329400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.586 [2024-11-20 05:47:56.329416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ea80, cid 3, qid 0 00:36:36.586 [2024-11-20 05:47:56.329831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.586 [2024-11-20 05:47:56.329845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.586 [2024-11-20 05:47:56.329849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ea80) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.329858] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:36:36.586 [2024-11-20 05:47:56.329868] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:36:36.586 [2024-11-20 05:47:56.329877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.329885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195dd90) 00:36:36.586 [2024-11-20 05:47:56.329891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.586 [2024-11-20 05:47:56.329906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ea80, cid 3, qid 0 00:36:36.586 [2024-11-20 05:47:56.330133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:36:36.586 [2024-11-20 05:47:56.330143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.586 [2024-11-20 05:47:56.330147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.330151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ea80) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.330161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.330165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.330169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195dd90) 00:36:36.586 [2024-11-20 05:47:56.330175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.586 [2024-11-20 05:47:56.330189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ea80, cid 3, qid 0 00:36:36.586 [2024-11-20 05:47:56.330240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.586 [2024-11-20 05:47:56.330246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.586 [2024-11-20 05:47:56.330250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.330254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ea80) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.330262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.330266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.330270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x195dd90) 00:36:36.586 [2024-11-20 05:47:56.330276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.586 [2024-11-20 05:47:56.330289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x199ea80, cid 3, qid 0 00:36:36.586 [2024-11-20 05:47:56.334459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:36:36.586 [2024-11-20 05:47:56.334473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:36:36.586 [2024-11-20 05:47:56.334477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:36:36.586 [2024-11-20 05:47:56.334482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x199ea80) on tqpair=0x195dd90 00:36:36.586 [2024-11-20 05:47:56.334490] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:36:36.586 0% 00:36:36.586 Data Units Read: 0 00:36:36.586 Data Units Written: 0 00:36:36.586 Host Read Commands: 0 00:36:36.586 Host Write Commands: 0 00:36:36.586 Controller Busy Time: 0 minutes 00:36:36.586 Power Cycles: 0 00:36:36.586 Power On Hours: 0 hours 00:36:36.586 Unsafe Shutdowns: 0 00:36:36.586 Unrecoverable Media Errors: 0 00:36:36.586 Lifetime Error Log Entries: 0 00:36:36.586 Warning Temperature Time: 0 minutes 00:36:36.586 Critical Temperature Time: 0 minutes 00:36:36.586 00:36:36.586 Number of Queues 00:36:36.586 ================ 00:36:36.586 Number of I/O Submission Queues: 127 00:36:36.586 Number of I/O Completion Queues: 127 00:36:36.586 00:36:36.586 Active Namespaces 00:36:36.586 ================= 00:36:36.586 Namespace ID:1 00:36:36.586 Error Recovery 
Timeout: Unlimited 00:36:36.586 Command Set Identifier: NVM (00h) 00:36:36.586 Deallocate: Supported 00:36:36.586 Deallocated/Unwritten Error: Not Supported 00:36:36.586 Deallocated Read Value: Unknown 00:36:36.586 Deallocate in Write Zeroes: Not Supported 00:36:36.586 Deallocated Guard Field: 0xFFFF 00:36:36.586 Flush: Supported 00:36:36.586 Reservation: Supported 00:36:36.586 Namespace Sharing Capabilities: Multiple Controllers 00:36:36.586 Size (in LBAs): 131072 (0GiB) 00:36:36.586 Capacity (in LBAs): 131072 (0GiB) 00:36:36.586 Utilization (in LBAs): 131072 (0GiB) 00:36:36.586 NGUID: ABCDEF0123456789ABCDEF0123456789 00:36:36.586 EUI64: ABCDEF0123456789 00:36:36.586 UUID: 166c7c94-c028-49d1-9566-551ec47b7e52 00:36:36.586 Thin Provisioning: Not Supported 00:36:36.586 Per-NS Atomic Units: Yes 00:36:36.586 Atomic Boundary Size (Normal): 0 00:36:36.586 Atomic Boundary Size (PFail): 0 00:36:36.586 Atomic Boundary Offset: 0 00:36:36.586 Maximum Single Source Range Length: 65535 00:36:36.587 Maximum Copy Length: 65535 00:36:36.587 Maximum Source Range Count: 1 00:36:36.587 NGUID/EUI64 Never Reused: No 00:36:36.587 Namespace Write Protected: No 00:36:36.587 Number of LBA Formats: 1 00:36:36.587 Current LBA Format: LBA Format #00 00:36:36.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:36.587 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:36.587 rmmod nvme_tcp 00:36:36.587 rmmod nvme_fabrics 00:36:36.587 rmmod nvme_keyring 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 87466 ']' 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 87466 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 87466 ']' 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 87466 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:36:36.587 
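The controller dump above is the payload of the nvmf_identify test: SPDK's identify example connects to the nqn.2016-06.io.spdk:cnode1 listener over NVMe/TCP and prints the controller and namespace data, while the interleaved *DEBUG* traces show the transport at work (per the handler names in the traces, "pdu type = 5" entries are CapsuleResp PDUs completing admin commands and "pdu type = 7" entries are C2HData PDUs carrying the identify and log-page payloads). As a hedged sketch, not a verbatim command from this run, the same dump could plausibly be reproduced from a stock SPDK build tree; the binary path, the subnqn field in the transport ID string, and the -L all debug flag are assumptions here:

    # assumed example binary location in an SPDK build tree
    sudo ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all   # assumed flag: would enable the *DEBUG* trace output seen interleaved above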
05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:36.587 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87466 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:36.846 killing process with pid 87466 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87466' 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 87466 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 87466 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:36.846 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.105 05:47:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:36:37.105 ************************************ 00:36:37.105 END TEST nvmf_identify 00:36:37.105 ************************************ 00:36:37.105 00:36:37.105 real 0m3.077s 00:36:37.105 user 0m7.612s 00:36:37.105 sys 0m0.912s 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:37.105 05:47:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.364 ************************************ 00:36:37.364 START TEST nvmf_perf 00:36:37.364 ************************************ 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:36:37.364 * Looking for test storage... 00:36:37.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:37.364 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:37.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.365 --rc genhtml_branch_coverage=1 00:36:37.365 --rc genhtml_function_coverage=1 00:36:37.365 --rc genhtml_legend=1 00:36:37.365 --rc geninfo_all_blocks=1 00:36:37.365 --rc geninfo_unexecuted_blocks=1 00:36:37.365 00:36:37.365 ' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:37.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.365 --rc genhtml_branch_coverage=1 00:36:37.365 --rc genhtml_function_coverage=1 00:36:37.365 --rc genhtml_legend=1 00:36:37.365 --rc geninfo_all_blocks=1 00:36:37.365 --rc geninfo_unexecuted_blocks=1 00:36:37.365 00:36:37.365 ' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:37.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.365 --rc genhtml_branch_coverage=1 00:36:37.365 --rc genhtml_function_coverage=1 00:36:37.365 --rc genhtml_legend=1 00:36:37.365 --rc geninfo_all_blocks=1 00:36:37.365 --rc geninfo_unexecuted_blocks=1 00:36:37.365 00:36:37.365 ' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:37.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.365 --rc genhtml_branch_coverage=1 00:36:37.365 --rc genhtml_function_coverage=1 00:36:37.365 --rc genhtml_legend=1 00:36:37.365 --rc geninfo_all_blocks=1 00:36:37.365 --rc geninfo_unexecuted_blocks=1 00:36:37.365 00:36:37.365 ' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:37.365 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:36:37.365 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:37.625 Cannot find device "nvmf_init_br" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:37.625 Cannot find device "nvmf_init_br2" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:37.625 Cannot find device "nvmf_tgt_br" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:37.625 Cannot find device "nvmf_tgt_br2" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:37.625 Cannot find device "nvmf_init_br" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:37.625 Cannot find device "nvmf_init_br2" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:37.625 Cannot find device "nvmf_tgt_br" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:37.625 Cannot find device "nvmf_tgt_br2" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:37.625 Cannot find device "nvmf_br" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:37.625 Cannot find device "nvmf_init_if" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:37.625 Cannot find device "nvmf_init_if2" 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:37.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:37.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:37.625 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:37.884 05:47:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:37.884 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:37.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:37.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:36:37.885 00:36:37.885 --- 10.0.0.3 ping statistics --- 00:36:37.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.885 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:37.885 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:36:37.885 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:36:37.885 00:36:37.885 --- 10.0.0.4 ping statistics --- 00:36:37.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.885 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:37.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:37.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:36:37.885 00:36:37.885 --- 10.0.0.1 ping statistics --- 00:36:37.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.885 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:37.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:37.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:36:37.885 00:36:37.885 --- 10.0.0.2 ping statistics --- 00:36:37.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:37.885 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=87746 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 87746 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 87746 ']' 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:37.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
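The nvmf_veth_init sequence traced above builds the test network: two initiator veths (10.0.0.1, 10.0.0.2) stay in the root namespace, the target ends (10.0.0.3, 10.0.0.4) move into the nvmf_tgt_ns_spdk namespace, all peers are stitched together through the nvmf_br bridge, and the four pings verify reachability in both directions before the target starts. A minimal hedged recreation, cut down to a single initiator/target pair (the helper itself wires two of each, plus the iptables ACCEPT rules shown above):

    # one veth pair per side; target ends live in a network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the root-namespace peers so the two namespaces can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3    # root ns -> target ns, as exercised in the trace above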
00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:37.885 05:47:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:37.885 [2024-11-20 05:47:57.788720] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:37.885 [2024-11-20 05:47:57.788816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.143 [2024-11-20 05:47:57.938294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:38.143 [2024-11-20 05:47:57.982741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.143 [2024-11-20 05:47:57.982791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.143 [2024-11-20 05:47:57.982801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.143 [2024-11-20 05:47:57.982809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.143 [2024-11-20 05:47:57.982832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:38.143 [2024-11-20 05:47:57.983714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.143 [2024-11-20 05:47:57.983809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.143 [2024-11-20 05:47:57.983891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.143 [2024-11-20 05:47:57.983893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:39.078 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:39.078 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:36:39.078 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:39.079 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:39.079 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:39.079 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:39.079 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:36:39.079 05:47:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:39.337 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:36:39.337 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:36:39.595 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:36:39.595 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:40.162 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:36:40.162 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:36:40.162 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
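The trace that follows provisions the target entirely over JSON-RPC: one TCP transport, one subsystem, the two bdevs enumerated above as namespaces, and data plus discovery listeners. Condensed here for reference; the commands are copied from the trace, and $rpc is shorthand introduced only in this sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Transport first; '-o' is the tcp option carried in NVMF_TRANSPORT_OPTS.
    $rpc nvmf_create_transport -t tcp -o
    # Subsystem cnode1: '-a' allows any host, '-s' sets the serial number.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Namespaces in add order: the ram-backed Malloc0, then the local NVMe
    # drive (0000:00:10.0) exposed as Nvme0n1.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # Listeners on the target-namespace address used by every run below.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420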
00:36:40.162 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:36:40.162 05:47:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:36:40.420 [2024-11-20 05:48:00.080245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.420 05:48:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:40.420 05:48:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:36:40.420 05:48:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:40.678 05:48:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:36:40.678 05:48:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:40.937 05:48:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:41.195 [2024-11-20 05:48:00.901266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:41.195 05:48:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:36:41.453 05:48:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:36:41.453 05:48:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:36:41.453 05:48:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:36:41.453 05:48:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:36:42.389 Initializing NVMe Controllers 00:36:42.389 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:36:42.389 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:36:42.389 Initialization complete. Launching workers. 00:36:42.389 ======================================================== 00:36:42.389 Latency(us) 00:36:42.389 Device Information : IOPS MiB/s Average min max 00:36:42.389 PCIE (0000:00:10.0) NSID 1 from core 0: 23939.10 93.51 1337.19 361.41 8591.88 00:36:42.389 ======================================================== 00:36:42.389 Total : 23939.10 93.51 1337.19 361.41 8591.88 00:36:42.389 00:36:42.389 05:48:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:43.762 Initializing NVMe Controllers 00:36:43.762 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:43.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:43.762 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:43.762 Initialization complete. Launching workers. 
00:36:43.762 ======================================================== 00:36:43.762 Latency(us) 00:36:43.762 Device Information : IOPS MiB/s Average min max 00:36:43.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4810.93 18.79 207.63 82.49 5098.08 00:36:43.762 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8047.48 5499.20 12001.44 00:36:43.762 ======================================================== 00:36:43.762 Total : 4935.93 19.28 406.16 82.49 12001.44 00:36:43.762 00:36:44.020 05:48:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:45.397 Initializing NVMe Controllers 00:36:45.397 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:45.397 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:45.397 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:45.397 Initialization complete. Launching workers. 00:36:45.397 ======================================================== 00:36:45.397 Latency(us) 00:36:45.397 Device Information : IOPS MiB/s Average min max 00:36:45.397 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11137.88 43.51 2873.50 495.84 6375.83 00:36:45.397 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2716.92 10.61 11811.31 6585.37 19247.00 00:36:45.397 ======================================================== 00:36:45.397 Total : 13854.80 54.12 4626.20 495.84 19247.00 00:36:45.397 00:36:45.397 05:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:36:45.397 05:48:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:47.929 Initializing NVMe Controllers 00:36:47.929 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:47.929 Controller IO queue size 128, less than required. 00:36:47.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:47.929 Controller IO queue size 128, less than required. 00:36:47.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:47.929 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:47.929 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:47.929 Initialization complete. Launching workers. 
00:36:47.929 ======================================================== 00:36:47.929 Latency(us) 00:36:47.929 Device Information : IOPS MiB/s Average min max 00:36:47.929 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1997.14 499.28 64677.06 42767.59 105521.30 00:36:47.929 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.96 144.49 232432.66 77464.32 362627.82 00:36:47.929 ======================================================== 00:36:47.929 Total : 2575.10 643.78 102328.51 42767.59 362627.82 00:36:47.929 00:36:47.929 05:48:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:36:48.187 Initializing NVMe Controllers 00:36:48.187 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:48.187 Controller IO queue size 128, less than required. 00:36:48.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:48.187 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:36:48.187 Controller IO queue size 128, less than required. 00:36:48.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:48.187 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:36:48.187 WARNING: Some requested NVMe devices were skipped 00:36:48.187 No valid NVMe controllers or AIO or URING devices found 00:36:48.187 05:48:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:36:50.718 Initializing NVMe Controllers 00:36:50.718 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:36:50.718 Controller IO queue size 128, less than required. 00:36:50.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:50.718 Controller IO queue size 128, less than required. 00:36:50.719 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:50.719 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:50.719 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:50.719 Initialization complete. Launching workers. 
00:36:50.719 00:36:50.719 ==================== 00:36:50.719 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:36:50.719 TCP transport: 00:36:50.719 polls: 8059 00:36:50.719 idle_polls: 3915 00:36:50.719 sock_completions: 4144 00:36:50.719 nvme_completions: 4805 00:36:50.719 submitted_requests: 7198 00:36:50.719 queued_requests: 1 00:36:50.719 00:36:50.719 ==================== 00:36:50.719 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:36:50.719 TCP transport: 00:36:50.719 polls: 8133 00:36:50.719 idle_polls: 4294 00:36:50.719 sock_completions: 3839 00:36:50.719 nvme_completions: 7319 00:36:50.719 submitted_requests: 10922 00:36:50.719 queued_requests: 1 00:36:50.719 ======================================================== 00:36:50.719 Latency(us) 00:36:50.719 Device Information : IOPS MiB/s Average min max 00:36:50.719 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.91 300.23 109312.23 66734.31 173587.30 00:36:50.719 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1829.36 457.34 70671.71 33971.45 114844.10 00:36:50.719 ======================================================== 00:36:50.719 Total : 3030.27 757.57 85985.11 33971.45 173587.30 00:36:50.719 00:36:50.719 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:36:50.719 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.977 rmmod nvme_tcp 00:36:50.977 rmmod nvme_fabrics 00:36:50.977 rmmod nvme_keyring 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 87746 ']' 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 87746 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 87746 ']' 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 87746 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:50.977 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87746 00:36:51.235 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:51.235 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:51.235 killing process with pid 87746 00:36:51.235 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87746' 00:36:51.235 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 87746 00:36:51.235 05:48:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 87746 00:36:51.800 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:51.800 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:51.800 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:51.801 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:36:52.059 00:36:52.059 real 0m14.836s 00:36:52.059 user 0m52.798s 00:36:52.059 sys 0m4.057s 00:36:52.059 05:48:11 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:36:52.059 ************************************ 00:36:52.059 END TEST nvmf_perf 00:36:52.059 ************************************ 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:52.059 05:48:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.059 ************************************ 00:36:52.059 START TEST nvmf_fio_host 00:36:52.059 ************************************ 00:36:52.060 05:48:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:36:52.319 * Looking for test storage... 00:36:52.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.319 --rc genhtml_branch_coverage=1 00:36:52.319 --rc genhtml_function_coverage=1 00:36:52.319 --rc genhtml_legend=1 00:36:52.319 --rc geninfo_all_blocks=1 00:36:52.319 --rc geninfo_unexecuted_blocks=1 00:36:52.319 00:36:52.319 ' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.319 --rc genhtml_branch_coverage=1 00:36:52.319 --rc genhtml_function_coverage=1 00:36:52.319 --rc genhtml_legend=1 00:36:52.319 --rc geninfo_all_blocks=1 00:36:52.319 --rc geninfo_unexecuted_blocks=1 00:36:52.319 00:36:52.319 ' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.319 --rc genhtml_branch_coverage=1 00:36:52.319 --rc genhtml_function_coverage=1 00:36:52.319 --rc genhtml_legend=1 00:36:52.319 --rc geninfo_all_blocks=1 00:36:52.319 --rc geninfo_unexecuted_blocks=1 00:36:52.319 00:36:52.319 ' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:52.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.319 --rc genhtml_branch_coverage=1 00:36:52.319 --rc genhtml_function_coverage=1 00:36:52.319 --rc genhtml_legend=1 00:36:52.319 --rc geninfo_all_blocks=1 00:36:52.319 --rc geninfo_unexecuted_blocks=1 00:36:52.319 00:36:52.319 ' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:52.319 05:48:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.319 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.320 05:48:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:52.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:52.320 Cannot find device "nvmf_init_br" 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:52.320 Cannot find device "nvmf_init_br2" 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:36:52.320 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:52.579 Cannot find device "nvmf_tgt_br" 00:36:52.579 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:36:52.579 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:36:52.579 Cannot find device "nvmf_tgt_br2" 00:36:52.579 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:36:52.579 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:52.579 Cannot find device "nvmf_init_br" 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:52.580 Cannot find device "nvmf_init_br2" 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:52.580 Cannot find device "nvmf_tgt_br" 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:52.580 Cannot find device "nvmf_tgt_br2" 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:52.580 Cannot find device "nvmf_br" 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:52.580 Cannot find device "nvmf_init_if" 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:52.580 Cannot find device "nvmf_init_if2" 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:52.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:52.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:52.580 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:52.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:36:52.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:36:52.839 00:36:52.839 --- 10.0.0.3 ping statistics --- 00:36:52.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.839 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:52.839 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:52.839 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:36:52.839 00:36:52.839 --- 10.0.0.4 ping statistics --- 00:36:52.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.839 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:52.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:52.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:36:52.839 00:36:52.839 --- 10.0.0.1 ping statistics --- 00:36:52.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.839 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:52.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:52.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:36:52.839 00:36:52.839 --- 10.0.0.2 ping statistics --- 00:36:52.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:52.839 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
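nvmf_veth_init, traced in full above for the second time in this log, builds the same fixture each run: a veth pair per endpoint, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, the bridge-side ends enslaved to nvmf_br, firewall rules tagged so teardown can strip them in one pass, then pings in both directions. A sketch with a single initiator/target pair (the trace creates two of each); the ip and iptables commands mirror the trace, and the ipts/iptr bodies are a reconstruction implied by the common.sh@790/@791 lines:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Every inserted rule carries an SPDK_NVMF comment, so cleanup is simply
    # 'iptables-save | grep -v SPDK_NVMF | iptables-restore' (see iptr later).
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                   # host -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> host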
00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88275 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88275 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 88275 ']' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:52.839 05:48:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.839 [2024-11-20 05:48:12.705819] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:52.839 [2024-11-20 05:48:12.705928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:53.098 [2024-11-20 05:48:12.859204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:53.098 [2024-11-20 05:48:12.914311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:53.098 [2024-11-20 05:48:12.914375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:53.098 [2024-11-20 05:48:12.914386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:53.098 [2024-11-20 05:48:12.914397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:53.098 [2024-11-20 05:48:12.914406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
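The launch line traced above decodes as follows; the flag meanings come from the target's own startup notices (shared-memory id, tracepoint mask, and the core mask behind the four reactor lines reported just below):

    # nvmf_tgt runs inside the target namespace so its listener binds 10.0.0.3.
    #   -i 0       shared-memory ID ($NVMF_APP_SHM_ID)
    #   -e 0xFFFF  tracepoint group mask ('Tracepoint Group Mask 0xFFFF specified')
    #   -m 0xF     core mask for cores 0-3, one reactor per core
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF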
00:36:53.098 [2024-11-20 05:48:12.915474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.098 [2024-11-20 05:48:12.915541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:53.098 [2024-11-20 05:48:12.915627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:53.098 [2024-11-20 05:48:12.915626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:53.357 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:53.357 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:36:53.357 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:53.615 [2024-11-20 05:48:13.285363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:53.615 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:36:53.615 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:53.615 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.615 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:36:53.874 Malloc1 00:36:53.874 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:54.132 05:48:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:54.390 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:54.649 [2024-11-20 05:48:14.431813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:54.649 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:36:54.907 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:36:54.907 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:36:54.907 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:36:54.907 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:54.907 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:54.907 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # shift 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:36:54.908 05:48:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:36:55.166 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:36:55.166 fio-3.35 00:36:55.166 Starting 1 thread 00:36:57.696 00:36:57.696 test: (groupid=0, jobs=1): err= 0: pid=88393: Wed Nov 20 05:48:17 2024 00:36:57.696 read: IOPS=9824, BW=38.4MiB/s (40.2MB/s)(77.0MiB/2006msec) 00:36:57.696 slat (nsec): min=1592, max=240373, avg=2248.53, stdev=2332.86 00:36:57.696 clat (usec): min=2314, max=12447, avg=6813.21, stdev=602.94 00:36:57.696 lat (usec): min=2346, max=12450, avg=6815.46, stdev=602.88 00:36:57.696 clat percentiles (usec): 00:36:57.696 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6390], 00:36:57.696 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915], 00:36:57.696 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:36:57.696 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[11338], 99.95th=[12125], 00:36:57.697 | 99.99th=[12387] 00:36:57.697 bw ( KiB/s): min=37936, max=41688, per=99.94%, avg=39272.00, stdev=1666.55, samples=4 00:36:57.697 iops : min= 9484, max=10422, avg=9818.00, stdev=416.64, samples=4 00:36:57.697 write: IOPS=9835, BW=38.4MiB/s (40.3MB/s)(77.1MiB/2006msec); 0 zone resets 00:36:57.697 slat (nsec): min=1631, max=168475, avg=2307.99, stdev=1531.73 00:36:57.697 clat (usec): min=1671, max=11036, avg=6160.28, stdev=522.50 00:36:57.697 lat (usec): min=1680, max=11038, avg=6162.59, stdev=522.46 00:36:57.697 clat percentiles (usec): 00:36:57.697 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5800], 
00:36:57.697 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6259], 00:36:57.697 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:36:57.697 | 99.00th=[ 7373], 99.50th=[ 8356], 99.90th=[ 8979], 99.95th=[ 9503], 00:36:57.697 | 99.99th=[10290] 00:36:57.697 bw ( KiB/s): min=38208, max=42304, per=100.00%, avg=39350.00, stdev=1972.48, samples=4 00:36:57.697 iops : min= 9552, max=10576, avg=9837.50, stdev=493.12, samples=4 00:36:57.697 lat (msec) : 2=0.03%, 4=0.15%, 10=99.70%, 20=0.12% 00:36:57.697 cpu : usr=67.23%, sys=24.99%, ctx=32, majf=0, minf=7 00:36:57.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:36:57.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:57.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:57.697 issued rwts: total=19707,19731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:57.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:57.697 00:36:57.697 Run status group 0 (all jobs): 00:36:57.697 READ: bw=38.4MiB/s (40.2MB/s), 38.4MiB/s-38.4MiB/s (40.2MB/s-40.2MB/s), io=77.0MiB (80.7MB), run=2006-2006msec 00:36:57.697 WRITE: bw=38.4MiB/s (40.3MB/s), 38.4MiB/s-38.4MiB/s (40.3MB/s-40.3MB/s), io=77.1MiB (80.8MB), run=2006-2006msec 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:36:57.697 05:48:17 
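The sanitizer probing traced around both fio runs (ldd piped through grep libasan / grep libclang_rt.asan) exists only to prepend a matching ASan runtime to LD_PRELOAD when the plugin was built with one; here both greps come back empty, so LD_PRELOAD carries just the plugin. The effective invocation, reconstructed from the trace values:

  # SPDK's fio plugin is loaded as an external ioengine via LD_PRELOAD; the
  # NVMe-oF target path is encoded in the --filename string (values as logged).
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
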
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:36:57.697 05:48:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:36:57.697 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:36:57.697 fio-3.35 00:36:57.697 Starting 1 thread 00:37:00.228 00:37:00.228 test: (groupid=0, jobs=1): err= 0: pid=88440: Wed Nov 20 05:48:19 2024 00:37:00.228 read: IOPS=8342, BW=130MiB/s (137MB/s)(261MiB/2005msec) 00:37:00.228 slat (nsec): min=2488, max=92808, avg=3298.81, stdev=2523.66 00:37:00.228 clat (usec): min=2424, max=33961, avg=8964.78, stdev=3256.64 00:37:00.228 lat (usec): min=2427, max=33980, avg=8968.07, stdev=3257.89 00:37:00.228 clat percentiles (usec): 00:37:00.228 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6652], 00:37:00.228 | 30.00th=[ 7308], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:37:00.228 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12125], 95.00th=[13829], 00:37:00.228 | 99.00th=[22414], 99.50th=[29754], 99.90th=[33424], 99.95th=[33817], 00:37:00.228 | 99.99th=[33817] 00:37:00.228 bw ( KiB/s): min=62656, max=75680, per=50.61%, avg=67552.00, stdev=6023.88, samples=4 00:37:00.228 iops : min= 3916, max= 4730, avg=4222.00, stdev=376.49, samples=4 00:37:00.228 write: IOPS=4725, BW=73.8MiB/s (77.4MB/s)(138MiB/1867msec); 0 zone resets 00:37:00.228 slat (usec): min=28, max=252, avg=34.88, stdev= 9.66 00:37:00.228 clat (usec): min=4495, max=38637, avg=11387.15, stdev=3615.61 00:37:00.228 lat (usec): min=4524, max=38696, avg=11422.03, stdev=3618.95 00:37:00.228 clat percentiles (usec): 00:37:00.228 | 1.00th=[ 6718], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 8717], 00:37:00.228 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11338], 00:37:00.228 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15401], 95.00th=[17171], 00:37:00.228 | 99.00th=[26346], 99.50th=[30540], 99.90th=[34866], 99.95th=[34866], 00:37:00.228 | 99.99th=[38536] 00:37:00.228 bw ( KiB/s): min=63200, max=78112, per=92.70%, avg=70088.00, stdev=6465.76, samples=4 00:37:00.228 iops : min= 3950, max= 4882, avg=4380.50, stdev=404.11, samples=4 00:37:00.228 lat (msec) : 4=0.50%, 10=61.97%, 20=35.85%, 50=1.68% 00:37:00.228 cpu : usr=71.42%, sys=19.55%, ctx=4, majf=0, minf=14 00:37:00.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:37:00.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:00.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:00.228 issued rwts: total=16727,8822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:00.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:00.228 00:37:00.228 Run status group 0 (all jobs): 00:37:00.228 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=261MiB (274MB), run=2005-2005msec 
00:37:00.228 WRITE: bw=73.8MiB/s (77.4MB/s), 73.8MiB/s-73.8MiB/s (77.4MB/s-77.4MB/s), io=138MiB (145MB), run=1867-1867msec 00:37:00.228 05:48:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:00.228 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:00.228 rmmod nvme_tcp 00:37:00.228 rmmod nvme_fabrics 00:37:00.229 rmmod nvme_keyring 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 88275 ']' 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 88275 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 88275 ']' 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 88275 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:00.229 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88275 00:37:00.487 killing process with pid 88275 00:37:00.487 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:00.487 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:00.487 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88275' 00:37:00.487 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 88275 00:37:00.488 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 88275 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:00.747 05:48:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:00.747 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:01.005 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:37:01.006 00:37:01.006 real 0m8.834s 00:37:01.006 user 0m34.150s 00:37:01.006 sys 0m2.677s 00:37:01.006 ************************************ 00:37:01.006 END TEST nvmf_fio_host 00:37:01.006 ************************************ 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:01.006 05:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.006 ************************************ 00:37:01.006 START TEST nvmf_failover 00:37:01.006 ************************************ 00:37:01.006 05:48:20 
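The nvmftestfini teardown above removes only the firewall rules the test itself added: every rule is inserted with an SPDK_NVMF comment (visible in the nvmf_failover setup below), so cleanup is a single filter pass. Reconstructed from the three pipeline stages in the @791 trace:

  # Drop exactly the comment-tagged test rules, keep everything else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
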
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:37:01.264 * Looking for test storage... 00:37:01.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:37:01.264 05:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:01.264 05:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:37:01.264 05:48:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:37:01.264 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.265 --rc genhtml_branch_coverage=1 00:37:01.265 --rc genhtml_function_coverage=1 00:37:01.265 --rc genhtml_legend=1 00:37:01.265 --rc geninfo_all_blocks=1 00:37:01.265 --rc geninfo_unexecuted_blocks=1 00:37:01.265 00:37:01.265 ' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.265 --rc genhtml_branch_coverage=1 00:37:01.265 --rc genhtml_function_coverage=1 00:37:01.265 --rc genhtml_legend=1 00:37:01.265 --rc geninfo_all_blocks=1 00:37:01.265 --rc geninfo_unexecuted_blocks=1 00:37:01.265 00:37:01.265 ' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.265 --rc genhtml_branch_coverage=1 00:37:01.265 --rc genhtml_function_coverage=1 00:37:01.265 --rc genhtml_legend=1 00:37:01.265 --rc geninfo_all_blocks=1 00:37:01.265 --rc geninfo_unexecuted_blocks=1 00:37:01.265 00:37:01.265 ' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:01.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.265 --rc genhtml_branch_coverage=1 00:37:01.265 --rc genhtml_function_coverage=1 00:37:01.265 --rc genhtml_legend=1 00:37:01.265 --rc geninfo_all_blocks=1 00:37:01.265 --rc geninfo_unexecuted_blocks=1 00:37:01.265 00:37:01.265 ' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.265 
05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:01.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
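The "common.sh: line 33: [: : integer expression expected" message above is a benign shell error, not a test failure: the trace shows '[' '' -eq 1 ']', i.e. an unset flag expanded to an empty string where test expects an integer, so the comparison errors out and simply evaluates false, and the script continues. A guarded form would be (a sketch; SPDK_TEST_X is a hypothetical stand-in for whichever flag line 33 actually reads):

  # ${SPDK_TEST_X:-0} substitutes 0 when the flag is unset or empty,
  # so [ ... -eq 1 ] always sees a numeric operand. SPDK_TEST_X is hypothetical.
  if [ "${SPDK_TEST_X:-0}" -eq 1 ]; then
      echo 'flag enabled'
  fi
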
00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.265 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:01.266 Cannot find device "nvmf_init_br" 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:01.266 Cannot find device "nvmf_init_br2" 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:37:01.266 Cannot find device "nvmf_tgt_br" 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:01.266 Cannot find device "nvmf_tgt_br2" 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:01.266 Cannot find device "nvmf_init_br" 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:37:01.266 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:01.524 Cannot find device "nvmf_init_br2" 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:01.524 Cannot find device "nvmf_tgt_br" 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:01.524 Cannot find device "nvmf_tgt_br2" 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:01.524 Cannot find device "nvmf_br" 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:01.524 Cannot find device "nvmf_init_if" 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:01.524 Cannot find device "nvmf_init_if2" 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:01.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:01.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:01.524 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:01.525 
05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:01.525 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:01.782 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:01.783 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:01.783 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:37:01.783 00:37:01.783 --- 10.0.0.3 ping statistics --- 00:37:01.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.783 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:01.783 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:01.783 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:37:01.783 00:37:01.783 --- 10.0.0.4 ping statistics --- 00:37:01.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.783 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:01.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:01.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:37:01.783 00:37:01.783 --- 10.0.0.1 ping statistics --- 00:37:01.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.783 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:01.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:37:01.783 00:37:01.783 --- 10.0.0.2 ping statistics --- 00:37:01.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.783 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=88711 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 88711 00:37:01.783 05:48:21 
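Steps @177 through @225 above assemble the self-contained test network that the four pings just verified: a network namespace holds the target side, veth pairs cross the namespace boundary, and a bridge joins the peer ends, with initiator addresses 10.0.0.1-2 outside and target addresses 10.0.0.3-4 inside. Condensed to one interface per side (the script creates two of each, and the link-up steps at @196-@204 are elided here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                          # initiator -> target, as in the log
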
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 88711 ']' 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:01.783 05:48:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:01.783 [2024-11-20 05:48:21.645294] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:01.783 [2024-11-20 05:48:21.645413] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.041 [2024-11-20 05:48:21.800026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:02.041 [2024-11-20 05:48:21.876963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.041 [2024-11-20 05:48:21.877197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.041 [2024-11-20 05:48:21.877303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.041 [2024-11-20 05:48:21.877352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.041 [2024-11-20 05:48:21.877378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
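nvmfappstart above launches nvmf_tgt inside the namespace (pid 88711) and then blocks in waitforlisten until the app's RPC socket answers; only then do the reactor notices below appear. A minimal readiness poll in the same spirit (a sketch, not the actual common.sh implementation; rpc_get_methods is a standard SPDK RPC and /var/tmp/spdk.sock the socket named in the log):

  # Poll the RPC socket until the target responds, then proceed.
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
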
00:37:02.041 [2024-11-20 05:48:21.878772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:02.041 [2024-11-20 05:48:21.878948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:02.041 [2024-11-20 05:48:21.878950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.976 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:02.976 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:37:02.976 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:02.976 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:02.976 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:02.976 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.977 05:48:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:03.235 [2024-11-20 05:48:22.977202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:03.235 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:03.493 Malloc0 00:37:03.493 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:03.750 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:04.008 05:48:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:04.265 [2024-11-20 05:48:24.071458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:04.265 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:37:04.522 [2024-11-20 05:48:24.287693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:37:04.522 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:37:04.781 [2024-11-20 05:48:24.564114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88827 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88827 /var/tmp/bdevperf.sock 
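With the subsystem listening on ports 4420, 4421 and 4422 (added above), the failover test registers two of those paths under one controller name through bdevperf's private RPC socket; -x failover selects the failover multipath policy, so removing the active listener (which happens next) should shift I/O onto the surviving path. As logged:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover         # first path
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover         # second path
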
00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 88827 ']' 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:04.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:04.781 05:48:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:05.714 05:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:05.714 05:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:37:05.714 05:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:37:05.973 NVMe0n1 00:37:05.973 05:48:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:37:06.278 00:37:06.278 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88870 00:37:06.278 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:37:06.278 05:48:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:07.675 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:07.675 [2024-11-20 05:48:27.412119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.412172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.412184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.412194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.412204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.412214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.412223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.412233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 
[2024-11-20 05:48:27.412242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 (last message repeated for each remaining recv-state transition on tqpair 0x236c830 while the 4420 listener is removed) 00:37:07.675 [2024-11-20 05:48:27.413641]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.413650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.413660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.414063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.675 [2024-11-20 05:48:27.414080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 [2024-11-20 05:48:27.414208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c830 is same with the state(6) to be set 00:37:07.676 05:48:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:37:10.961 05:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:37:10.961 00:37:10.961 05:48:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:37:11.220 [2024-11-20 05:48:30.997563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the state(6) to be set 00:37:11.220 [2024-11-20 05:48:30.997788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d5e0 is same with the 
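The attach at failover.sh@47 above is the host side of the exercise: bdevperf presumably already holds a controller named NVMe0 on a primary portal (established earlier in the run, outside this excerpt), and this second bdev_nvme_attach_controller call with -x failover registers 10.0.0.3:4422 as an alternate path under the same controller name. A minimal standalone sketch of that step, using only the paths and arguments visible in this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1
  # Register 10.0.0.3:4422 as an additional (failover) path for the
  # NVMe0 controller inside the running bdevperf process.
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n "$nqn" -x failover
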
00:37:11.220 05:48:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:37:14.570 05:48:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:37:14.570 [2024-11-20 05:48:34.286746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:37:14.570 05:48:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:37:15.504 05:48:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:37:15.764 [2024-11-20 05:48:35.569289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b75b0 is same with the state(6) to be set
00:37:15.764 [message repeated for tqpair=0x24b75b0 from 05:48:35.569346 through 05:48:35.570336]
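Steps @48 through @57 above are the target-side portal shuffle that actually forces failover: the listener the host is connected to is removed, a different one is added, and after a grace period the alternate portal is removed as well. Condensed into a sketch, with portal addresses and sleep durations copied from this log (that rpc.py's default socket reaches the target is an assumption here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421   # drop the portal in use
  sleep 3                                                                   # let the host fail over to 4422
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420      # bring up a fresh portal
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422   # force a second failover
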
00:37:15.765 05:48:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88870
00:37:22.347 {
00:37:22.347   "results": [
00:37:22.347     {
00:37:22.347       "job": "NVMe0n1",
00:37:22.347       "core_mask": "0x1",
00:37:22.347       "workload": "verify",
00:37:22.347       "status": "finished",
00:37:22.347       "verify_range": {
00:37:22.347         "start": 0,
00:37:22.347         "length": 16384
00:37:22.347       },
00:37:22.347       "queue_depth": 128,
00:37:22.347       "io_size": 4096,
00:37:22.347       "runtime": 15.005564,
00:37:22.347       "iops": 10126.37712251269,
00:37:22.347       "mibps": 39.55616063481519,
00:37:22.347       "io_failed": 4013,
00:37:22.347       "io_timeout": 0,
00:37:22.347       "avg_latency_us": 12291.939658012405,
00:37:22.347       "min_latency_us": 485.66857142857145,
00:37:22.347       "max_latency_us": 25340.586666666666
00:37:22.347     }
00:37:22.347   ],
00:37:22.347   "core_count": 1
00:37:22.347 }
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88827
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 88827 ']'
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 88827
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88827
00:37:22.347 killing process with pid 88827
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88827'
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 88827
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 88827
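The JSON block printed after "wait 88870" is bdevperf's result summary for the 15-second verify run: roughly 10k IOPS with 4013 failed I/Os, which is consistent with I/O being aborted and retried across the portal flips while the run still finishes cleanly. A sketch for pulling the headline numbers out of such a block, assuming it has been saved to results.json and that jq is available (neither is something the CI script does at this point):

  jq -r '.results[0] | "iops=\(.iops) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' results.json
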
00:37:22.347 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:37:22.347 [2024-11-20 05:48:24.623971] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:37:22.347 [2024-11-20 05:48:24.624060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88827 ]
00:37:22.347 [2024-11-20 05:48:24.773117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:22.347 [2024-11-20 05:48:24.865932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:22.347 Running I/O for 15 seconds...
00:37:22.347 11460.00 IOPS, 44.77 MiB/s [2024-11-20T05:48:42.266Z]
00:37:22.347 [2024-11-20 05:48:27.414834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:22.347 [2024-11-20 05:48:27.414889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:22.347 [nvme_qpair command/completion dump repeated for READ commands lba:102880 through lba:103432 and WRITE commands lba:103496 through lba:103872, each completed ABORTED - SQ DELETION (00/08)]
00:37:22.350 [2024-11-20 05:48:27.418832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103880
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.350 [2024-11-20 05:48:27.418847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.350 [2024-11-20 05:48:27.418863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:22.350 [2024-11-20 05:48:27.418878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.350 [2024-11-20 05:48:27.418894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.350 [2024-11-20 05:48:27.418909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.350 [2024-11-20 05:48:27.418927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.350 [2024-11-20 05:48:27.418943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.350 [2024-11-20 05:48:27.418959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.350 [2024-11-20 05:48:27.418974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.350 [2024-11-20 05:48:27.418995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.350 [2024-11-20 05:48:27.419010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.350 [2024-11-20 05:48:27.419026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.351 [2024-11-20 05:48:27.419053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.351 [2024-11-20 05:48:27.419068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.351 [2024-11-20 05:48:27.419083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.351 [2024-11-20 05:48:27.419098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702360 is same with the state(6) to be set 00:37:22.351 [2024-11-20 05:48:27.419116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.351 [2024-11-20 05:48:27.419126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.351 [2024-11-20 05:48:27.419136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103488 len:8 PRP1 0x0 PRP2 0x0 00:37:22.351 [2024-11-20 05:48:27.419150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.351 [2024-11-20 05:48:27.419225] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 
1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:37:22.351 [2024-11-20 05:48:27.419280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.351 [2024-11-20 05:48:27.419296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.351 [2024-11-20 05:48:27.419310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.351 [2024-11-20 05:48:27.419323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.351 [2024-11-20 05:48:27.419338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.351 [2024-11-20 05:48:27.419352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.351 [2024-11-20 05:48:27.419365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:22.351 [2024-11-20 05:48:27.419380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.351 [2024-11-20 05:48:27.419403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:37:22.351 [2024-11-20 05:48:27.422649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:22.351 [2024-11-20 05:48:27.422689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1695f30 (9): Bad file descriptor 00:37:22.351 [2024-11-20 05:48:27.448306] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:37:22.351 9974.00 IOPS, 38.96 MiB/s [2024-11-20T05:48:42.270Z] 9509.67 IOPS, 37.15 MiB/s [2024-11-20T05:48:42.270Z] 9722.00 IOPS, 37.98 MiB/s [2024-11-20T05:48:42.270Z] [2024-11-20 05:48:30.998341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:22.351 [2024-11-20 05:48:30.998403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... entries from 05:48:30.998470 to 05:48:31.001709 omitted: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs for READ sqid:1 lba:117472-117712 and WRITE sqid:1 lba:117776-118400, each ABORTED - SQ DELETION (00/08) qid:1 ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.001741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.001753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118408 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.001767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.001784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.001794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.001804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118416 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.001819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.001832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.001847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.001857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118424 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.001870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.001883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.001892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.001903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118432 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.001916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.001930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.001939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.001949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118440 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.001962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.001975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.001985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.001995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118448 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.002009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 
05:48:31.002024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.002033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.002044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118456 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.002058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.002071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.002083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.002093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118464 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.002105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.002119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.002129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.002139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118472 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.002153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.002166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.002176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.002187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:118480 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.002202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.002223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.002232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.002243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117720 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.002257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.002270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:22.354 [2024-11-20 05:48:31.002280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:22.354 [2024-11-20 05:48:31.002290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117728 len:8 PRP1 0x0 PRP2 0x0 00:37:22.354 [2024-11-20 05:48:31.002303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:22.354 [2024-11-20 05:48:31.002317] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same abort/manual-complete sequence repeats for queued READs lba:117736-117768 (len:8, PRP1 0x0 PRP2 0x0), timestamps 05:48:31.002326-31.016492 ...]
00:37:22.354 [2024-11-20 05:48:31.016582] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:3, cid:2, cid:1, cid:0) aborted with ABORTED - SQ DELETION (00/08), timestamps 05:48:31.016668-31.016848 ...]
00:37:22.355 [2024-11-20 05:48:31.016871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:37:22.355 [2024-11-20 05:48:31.016943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1695f30 (9): Bad file descriptor
00:37:22.355 [2024-11-20 05:48:31.022090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:37:22.355 [2024-11-20 05:48:31.044827] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:37:22.355 9689.80 IOPS, 37.85 MiB/s [2024-11-20T05:48:42.274Z] 9842.33 IOPS, 38.45 MiB/s [2024-11-20T05:48:42.274Z] 10137.00 IOPS, 39.60 MiB/s [2024-11-20T05:48:42.274Z] 10343.25 IOPS, 40.40 MiB/s [2024-11-20T05:48:42.274Z] 10508.33 IOPS, 41.05 MiB/s [2024-11-20T05:48:42.274Z]
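# Every flushed command above completed with the same status pair "(00/08)",
# i.e. status code type 0x00 (generic) / status code 0x08. A small decoder for
# that pair, written for this note rather than taken from the SPDK tree (the
# mapping follows the NVMe base specification's status code tables):
decode_nvme_status() {
    local sct=${1%/*} sc=${1#*/}        # split "00/08" into sct=00, sc=08
    case "${sct}/${sc}" in
        00/00) echo "generic / SUCCESSFUL COMPLETION" ;;
        00/08) echo "generic / ABORTED - SQ DELETION (command flushed because its submission queue was deleted)" ;;
        *)     echo "sct 0x${sct} / sc 0x${sc} - see the NVMe base spec status code tables" ;;
    esac
}
decode_nvme_status 00/08                # the case printed throughout this log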
00:37:22.355 [2024-11-20 05:48:35.571138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:22.355 [2024-11-20 05:48:35.571181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / ABORTED - SQ DELETION (00/08) pair repeats for every outstanding command on qid:1: WRITEs lba:16392-17224 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs lba:16264-16328 (len:8, SGL TRANSPORT DATA BLOCK), timestamps 05:48:35.571204-35.574504 ...]
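# The flush above mixes READ and WRITE commands. A quick way to tally the
# aborted I/O by opcode from a saved copy of this console output (the file
# name bdevperf.log is a stand-in, not something the test itself writes):
grep -o '\*NOTICE\*: \(READ\|WRITE\) sqid:1' bdevperf.log | sort | uniq -c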
[... the remaining queued commands are flushed the same way: WRITEs lba:17232-17272 and READs lba:16336-16384 (len:8, PRP1 0x0 PRP2 0x0) each completed manually with ABORTED - SQ DELETION (00/08), timestamps 05:48:35.574536-35.575124 ...]
00:37:22.358 [2024-11-20 05:48:35.575178] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) aborted with ABORTED - SQ DELETION (00/08), timestamps 05:48:35.575231-35.575330 ...]
00:37:22.358 [2024-11-20 05:48:35.575344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:37:22.358 [2024-11-20 05:48:35.575408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1695f30 (9): Bad file descriptor
00:37:22.358 [2024-11-20 05:48:35.578164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:37:22.358 [2024-11-20 05:48:35.608209] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
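# This is the third "Resetting controller successful" message in the run (one
# reset earlier in the log, plus the 4421->4422 and 4422->4420 failovers seen
# above). The script's pass/fail check, visible in the xtrace further below,
# is just a line count; a stand-alone equivalent (bdevperf.log stands in for
# the captured output) would be:
count=$(grep -c 'Resetting controller successful' bdevperf.log)
(( count != 3 )) && { echo "expected 3 successful resets, saw ${count}" >&2; exit 1; }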
00:37:22.358 10344.50 IOPS, 40.41 MiB/s [2024-11-20T05:48:42.277Z] 10290.45 IOPS, 40.20 MiB/s [2024-11-20T05:48:42.277Z] 10239.25 IOPS, 40.00 MiB/s [2024-11-20T05:48:42.277Z] 10196.92 IOPS, 39.83 MiB/s [2024-11-20T05:48:42.277Z] 10155.43 IOPS, 39.67 MiB/s [2024-11-20T05:48:42.277Z] 10123.87 IOPS, 39.55 MiB/s
00:37:22.358 Latency(us)
00:37:22.358 [2024-11-20T05:48:42.277Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:22.358 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:22.358 Verification LBA range: start 0x0 length 0x4000
00:37:22.358 NVMe0n1                     :      15.01   10126.38      39.56     267.43     0.00   12291.94     485.67   25340.59
00:37:22.358 [2024-11-20T05:48:42.277Z] ===================================================================================================================
00:37:22.358 [2024-11-20T05:48:42.277Z] Total                       :            10126.38      39.56     267.43     0.00   12291.94     485.67   25340.59
00:37:22.358 Received shutdown signal, test time was about 15.000000 seconds
00:37:22.358
00:37:22.358 Latency(us)
00:37:22.358 [2024-11-20T05:48:42.278Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:22.359 [2024-11-20T05:48:42.278Z] ===================================================================================================================
00:37:22.359 [2024-11-20T05:48:42.278Z] Total                       :                0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89074
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89074 /var/tmp/bdevperf.sock
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 89074 ']'
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:37:22.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
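# Sanity check on the summary row above: with 4096-byte I/Os, MiB/s follows
# directly from IOPS (plain arithmetic, not an SPDK utility):
awk 'BEGIN { printf "%.2f MiB/s\n", 10126.38 * 4096 / 1048576 }'    # -> 39.56, matching the table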
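# The xtrace that follows boils down to the RPC sequence sketched here. The
# commands are exactly the ones the script issues below; only the grouping,
# the loop, and the comments are editorial:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
sock=/var/tmp/bdevperf.sock

$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4421     # second target portal
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4422     # third target portal

for port in 4420 4421 4422; do                                       # register all three paths
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
         -a 10.0.0.3 -s $port -f ipv4 -n $nqn -x failover            # host side, failover mode
done

$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 \
     -s 4420 -f ipv4 -n $nqn                                         # drop the active path
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0              # controller must still exist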
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:37:22.359 05:48:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:37:22.926 05:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:37:22.926 05:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:37:22.926 05:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:37:22.926 [2024-11-20 05:48:42.745616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:37:22.926 05:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:37:23.185 [2024-11-20 05:48:42.953881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:37:23.185 05:48:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:37:23.443 NVMe0n1
00:37:23.443 05:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:37:24.011
00:37:24.011 05:48:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:37:24.269
00:37:24.269 05:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:37:24.269 05:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:37:24.527 05:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:37:24.785 05:48:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:37:28.068 05:48:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:37:28.068 05:48:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:37:28.068 05:48:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89211
00:37:28.068 05:48:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89211
00:37:28.068 05:48:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:37:29.003 {
00:37:29.003 "results": [
00:37:29.003 {
00:37:29.003 "job": "NVMe0n1",
00:37:29.003 "core_mask": "0x1",
00:37:29.003 "workload": "verify",
00:37:29.003 "status": "finished",
00:37:29.003 "verify_range": {
00:37:29.003 "start": 0,
00:37:29.003 "length": 16384
00:37:29.003 },
00:37:29.003 "queue_depth": 128,
00:37:29.003 "io_size": 4096,
00:37:29.003 "runtime": 1.008664,
00:37:29.003 "iops": 9558.187860377688,
00:37:29.003 "mibps": 37.33667132960034,
00:37:29.003 "io_failed": 0,
00:37:29.003 "io_timeout": 0,
00:37:29.003 "avg_latency_us": 13329.718966517008,
00:37:29.003 "min_latency_us": 1732.0228571428572,
00:37:29.003 "max_latency_us": 13731.352380952381
00:37:29.003 }
00:37:29.003 ],
00:37:29.003 "core_count": 1
00:37:29.003 }
00:37:29.003 05:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:37:29.003 [2024-11-20 05:48:41.577061] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:37:29.003 [2024-11-20 05:48:41.577224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89074 ]
00:37:29.003 [2024-11-20 05:48:41.732279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:29.003 [2024-11-20 05:48:41.780181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:29.003 [2024-11-20 05:48:44.444378] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:37:29.003 [2024-11-20 05:48:44.444496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:37:29.003 [2024-11-20 05:48:44.444518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:29.003 [2024-11-20 05:48:44.444534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:37:29.003 [2024-11-20 05:48:44.444548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:29.003 [2024-11-20 05:48:44.444562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:37:29.003 [2024-11-20 05:48:44.444576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:29.003 [2024-11-20 05:48:44.444589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:37:29.003 [2024-11-20 05:48:44.444602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:29.003 [2024-11-20 05:48:44.444616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:37:29.003 [2024-11-20 05:48:44.444661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:37:29.003 [2024-11-20 05:48:44.444684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4df30 (9): Bad file descriptor
00:37:29.003 [2024-11-20 05:48:44.453413] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:37:29.003 Running I/O for 1 seconds...
00:37:29.003 9513.00 IOPS, 37.16 MiB/s
00:37:29.003 Latency(us)
00:37:29.003 [2024-11-20T05:48:48.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:29.003 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:29.003 Verification LBA range: start 0x0 length 0x4000
00:37:29.003 NVMe0n1 : 1.01 9558.19 37.34 0.00 0.00 13329.72 1732.02 13731.35
00:37:29.003 [2024-11-20T05:48:48.922Z] ===================================================================================================================
00:37:29.003 [2024-11-20T05:48:48.922Z] Total : 9558.19 37.34 0.00 0.00 13329.72 1732.02 13731.35
00:37:29.003 05:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:37:29.003 05:48:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:37:29.570 05:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:37:29.570 05:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:37:29.570 05:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:37:29.828 05:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:37:30.086 05:48:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:37:33.371 05:48:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:37:33.371 05:48:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89074
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 89074 ']'
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 89074
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89074
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:37:33.371 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 89074
00:37:33.372 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89074'
00:37:33.372 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 89074
00:37:33.372 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 89074
00:37:33.630 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:37:33.630 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:33.889 rmmod nvme_tcp 00:37:33.889 rmmod nvme_fabrics 00:37:33.889 rmmod nvme_keyring 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 88711 ']' 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 88711 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 88711 ']' 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 88711 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:33.889 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88711 00:37:34.147 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:34.147 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:34.147 killing process with pid 88711 00:37:34.147 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88711' 00:37:34.147 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 88711 00:37:34.147 05:48:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 88711 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:34.148 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:37:34.405 00:37:34.405 real 0m33.424s 00:37:34.405 user 2m7.200s 00:37:34.405 sys 0m6.184s 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:37:34.405 ************************************ 00:37:34.405 END TEST nvmf_failover 00:37:34.405 ************************************ 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:34.405 05:48:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:34.664 ************************************ 00:37:34.664 START TEST nvmf_host_discovery 00:37:34.664 ************************************ 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:37:34.664 * Looking for test storage... 
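For reference, the nvmf_veth_fini teardown traced above condenses to the following sequence (commands taken from this log; the per-interface nomaster/down calls are folded into a loop, and the final netns delete is an assumption about what _remove_spdk_ns ultimately does):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
        ip link set "$ifc" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: the namespace removal behind _remove_spdk_ns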
00:37:34.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:34.664 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.665 --rc genhtml_branch_coverage=1 00:37:34.665 --rc genhtml_function_coverage=1 00:37:34.665 --rc genhtml_legend=1 00:37:34.665 --rc geninfo_all_blocks=1 00:37:34.665 --rc geninfo_unexecuted_blocks=1 00:37:34.665 00:37:34.665 ' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.665 --rc genhtml_branch_coverage=1 00:37:34.665 --rc genhtml_function_coverage=1 00:37:34.665 --rc genhtml_legend=1 00:37:34.665 --rc geninfo_all_blocks=1 00:37:34.665 --rc geninfo_unexecuted_blocks=1 00:37:34.665 00:37:34.665 ' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.665 --rc genhtml_branch_coverage=1 00:37:34.665 --rc genhtml_function_coverage=1 00:37:34.665 --rc genhtml_legend=1 00:37:34.665 --rc geninfo_all_blocks=1 00:37:34.665 --rc geninfo_unexecuted_blocks=1 00:37:34.665 00:37:34.665 ' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:34.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:34.665 --rc genhtml_branch_coverage=1 00:37:34.665 --rc genhtml_function_coverage=1 00:37:34.665 --rc genhtml_legend=1 00:37:34.665 --rc geninfo_all_blocks=1 00:37:34.665 --rc geninfo_unexecuted_blocks=1 00:37:34.665 00:37:34.665 ' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:34.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:34.665 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.666 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.924 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:34.925 Cannot find device "nvmf_init_br" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:34.925 Cannot find device "nvmf_init_br2" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:34.925 Cannot find device "nvmf_tgt_br" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:34.925 Cannot find device "nvmf_tgt_br2" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:34.925 Cannot find device "nvmf_init_br" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:34.925 Cannot find device "nvmf_init_br2" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:34.925 Cannot find device "nvmf_tgt_br" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:34.925 Cannot find device "nvmf_tgt_br2" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:34.925 Cannot find device "nvmf_br" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:37:34.925 Cannot find device "nvmf_init_if" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:34.925 Cannot find device "nvmf_init_if2" 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:34.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:34.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:34.925 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:37:35.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:37:35.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms
00:37:35.184
00:37:35.184 --- 10.0.0.3 ping statistics ---
00:37:35.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:35.184 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:37:35.184 05:48:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:37:35.184 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:37:35.184 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms
00:37:35.184
00:37:35.184 --- 10.0.0.4 ping statistics ---
00:37:35.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:35.184 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:37:35.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:35.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:37:35.184
00:37:35.184 --- 10.0.0.1 ping statistics ---
00:37:35.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:35.184 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:37:35.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:35.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:37:35.184 00:37:35.184 --- 10.0.0.2 ping statistics --- 00:37:35.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.184 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=89573 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 89573 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 89573 ']' 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:35.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:35.184 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.443 [2024-11-20 05:48:55.103016] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:37:35.443 [2024-11-20 05:48:55.103117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.443 [2024-11-20 05:48:55.256509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.443 [2024-11-20 05:48:55.299236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.443 [2024-11-20 05:48:55.299287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.443 [2024-11-20 05:48:55.299296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.443 [2024-11-20 05:48:55.299304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.443 [2024-11-20 05:48:55.299311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.443 [2024-11-20 05:48:55.299626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.702 [2024-11-20 05:48:55.462644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.702 [2024-11-20 05:48:55.470768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.702 null0 00:37:35.702 05:48:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.702 null1 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89605 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89605 /tmp/host.sock 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 89605 ']' 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:35.702 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:35.702 05:48:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:35.702 [2024-11-20 05:48:55.564231] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:37:35.702 [2024-11-20 05:48:55.564335] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89605 ] 00:37:35.961 [2024-11-20 05:48:55.722331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.961 [2024-11-20 05:48:55.811948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:37:36.900 05:48:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 [2024-11-20 05:48:56.771001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:36.900 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:36.901 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:36.901 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:36.901 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:37.160 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:37:37.161 05:48:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:37:37.730 [2024-11-20 05:48:57.475417] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:37:37.730 [2024-11-20 05:48:57.475461] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:37:37.730 
[2024-11-20 05:48:57.475488] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:37:37.730 [2024-11-20 05:48:57.562558] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:37:37.730 [2024-11-20 05:48:57.617481] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:37:37.730 [2024-11-20 05:48:57.618196] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x640ba0:1 started. 00:37:37.730 [2024-11-20 05:48:57.619935] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:37:37.730 [2024-11-20 05:48:57.619963] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:37:37.730 [2024-11-20 05:48:57.624030] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x640ba0 was disconnected and freed. delete nvme_qpair. 00:37:38.297 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:38.298 05:48:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.298 [2024-11-20 05:48:58.178564] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5bb260:1 started. 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:37:38.298 [2024-11-20 05:48:58.184116] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5bb260 was disconnected and freed. delete nvme_qpair. 
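At this point the test has added a second namespace (null1) to nqn.2016-06.io.spdk:cnode0 and is polling its get_bdev_list helper until the host side exposes both namespaces as bdevs. Reproduced by hand against the host's RPC socket, the check would look roughly like the sketch below — the socket path, RPC method, and jq filter are exactly what the trace shows, while the scripts/rpc.py entry point and the retry loop around it are assumptions, not the test script's literal code:

    # Sketch: poll until both discovered namespaces appear as host-side bdevs.
    # /tmp/host.sock and the jq filter come from the trace; the loop is assumed.
    while true; do
        names=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ "$names" == "nvme0n1 nvme0n2" ]] && break
        sleep 1
    done
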
00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:38.298 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:37:38.557 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.558 [2024-11-20 05:48:58.295769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:37:38.558 [2024-11-20 05:48:58.295916] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:37:38.558 [2024-11-20 05:48:58.295947] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:38.558 [2024-11-20 05:48:58.381968] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:38.558 05:48:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.558 [2024-11-20 05:48:58.440355] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:37:38.558 [2024-11-20 05:48:58.440414] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:37:38.558 [2024-11-20 05:48:58.440430] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:37:38.558 [2024-11-20 05:48:58.440439] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:37:38.558 05:48:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.932 [2024-11-20 05:48:59.566810] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:37:39.932 [2024-11-20 05:48:59.566846] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:37:39.932 [2024-11-20 05:48:59.567701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.932 [2024-11-20 05:48:59.567743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.932 [2024-11-20 05:48:59.567764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.932 [2024-11-20 05:48:59.567778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.932 [2024-11-20 05:48:59.567792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.932 [2024-11-20 05:48:59.567806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.932 [2024-11-20 05:48:59.567822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:39.932 [2024-11-20 05:48:59.567837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:39.932 [2024-11-20 05:48:59.567853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.932 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:39.933 [2024-11-20 05:48:59.577657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:39.933 [2024-11-20 05:48:59.587673] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:39.933 [2024-11-20 05:48:59.587703] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:37:39.933 [2024-11-20 05:48:59.587712] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.587721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:39.933 [2024-11-20 05:48:59.587760] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.587845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.933 [2024-11-20 05:48:59.587868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613280 with addr=10.0.0.3, port=4420 00:37:39.933 [2024-11-20 05:48:59.587882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.933 [2024-11-20 05:48:59.587901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.933 [2024-11-20 05:48:59.587918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:39.933 [2024-11-20 05:48:59.587930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:39.933 [2024-11-20 05:48:59.587944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
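The xtrace interleaved with the driver messages is the suite's waitforcondition helper: it stores the condition string, allows at most ten attempts via (( max-- )), evals the condition on each pass, and sleeps one second between failures. A minimal reconstruction inferred from the trace — the real helper lives in common/autotest_common.sh and may differ in detail:

    # Inferred from the xtrace (@916-@922): bounded eval-and-retry.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # e.g. waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
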
00:37:39.933 [2024-11-20 05:48:59.587955] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:39.933 [2024-11-20 05:48:59.587963] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:39.933 [2024-11-20 05:48:59.587971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.933 [2024-11-20 05:48:59.597772] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:39.933 [2024-11-20 05:48:59.597799] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:37:39.933 [2024-11-20 05:48:59.597809] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.597817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:39.933 [2024-11-20 05:48:59.597855] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.597925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.933 [2024-11-20 05:48:59.597951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613280 with addr=10.0.0.3, port=4420 00:37:39.933 [2024-11-20 05:48:59.597969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.933 [2024-11-20 05:48:59.597994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.933 [2024-11-20 05:48:59.598017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:39.933 [2024-11-20 05:48:59.598043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:39.933 [2024-11-20 05:48:59.598059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:39.933 [2024-11-20 05:48:59.598072] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:39.933 [2024-11-20 05:48:59.598082] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:39.933 [2024-11-20 05:48:59.598091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:39.933 [2024-11-20 05:48:59.607868] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:39.933 [2024-11-20 05:48:59.607895] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:37:39.933 [2024-11-20 05:48:59.607905] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:37:39.933 [2024-11-20 05:48:59.607913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:39.933 [2024-11-20 05:48:59.607952] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.608020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.933 [2024-11-20 05:48:59.608043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613280 with addr=10.0.0.3, port=4420 00:37:39.933 [2024-11-20 05:48:59.608054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.933 [2024-11-20 05:48:59.608073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.933 [2024-11-20 05:48:59.608095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:39.933 [2024-11-20 05:48:59.608108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:39.933 [2024-11-20 05:48:59.608123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:39.933 [2024-11-20 05:48:59.608135] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:39.933 [2024-11-20 05:48:59.608145] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:39.933 [2024-11-20 05:48:59.608152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:39.933 [2024-11-20 05:48:59.617962] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:39.933 [2024-11-20 05:48:59.617987] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:37:39.933 [2024-11-20 05:48:59.617997] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.618005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:39.933 [2024-11-20 05:48:59.618041] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
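The delete-qpairs / disconnect / reconnect / connection-refused cycle then repeats roughly every 10 ms (note the .587 / .597 / .607 / .617 timestamps) until the discovery poller processes the updated log page. When reading a saved capture of this console output, collapsing the repetition makes the controller state machine easier to follow; for example, where build.log is a stand-in filename, not an artifact this job produces:

    # build.log is a placeholder for wherever this console output was saved.
    grep -c 'Start reconnecting ctrlr' build.log          # number of retry cycles
    grep 'Resetting controller failed' build.log | head   # one terminal line per cycle
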
00:37:39.933 [2024-11-20 05:48:59.618106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.933 [2024-11-20 05:48:59.618131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613280 with addr=10.0.0.3, port=4420 00:37:39.933 [2024-11-20 05:48:59.618148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.933 [2024-11-20 05:48:59.618170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.933 [2024-11-20 05:48:59.618198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:39.933 [2024-11-20 05:48:59.618211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:39.933 [2024-11-20 05:48:59.618226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:39.933 [2024-11-20 05:48:59.618239] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:39.933 [2024-11-20 05:48:59.618248] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:39.933 [2024-11-20 05:48:59.618255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:39.933 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.933 [2024-11-20 05:48:59.628053] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:39.933 [2024-11-20 05:48:59.628080] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:37:39.933 [2024-11-20 05:48:59.628090] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.628098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:39.933 [2024-11-20 05:48:59.628131] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.628200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.933 [2024-11-20 05:48:59.628224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613280 with addr=10.0.0.3, port=4420 00:37:39.933 [2024-11-20 05:48:59.628240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.933 [2024-11-20 05:48:59.628263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.933 [2024-11-20 05:48:59.628298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:39.933 [2024-11-20 05:48:59.628314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:39.933 [2024-11-20 05:48:59.628329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:39.933 [2024-11-20 05:48:59.628343] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:39.933 [2024-11-20 05:48:59.628351] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:39.933 [2024-11-20 05:48:59.628358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:39.933 [2024-11-20 05:48:59.638139] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:39.933 [2024-11-20 05:48:59.638165] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:37:39.933 [2024-11-20 05:48:59.638175] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.638183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:39.933 [2024-11-20 05:48:59.638217] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:37:39.933 [2024-11-20 05:48:59.638282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.933 [2024-11-20 05:48:59.638308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613280 with addr=10.0.0.3, port=4420 00:37:39.933 [2024-11-20 05:48:59.638326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.933 [2024-11-20 05:48:59.638348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.933 [2024-11-20 05:48:59.638367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:39.933 [2024-11-20 05:48:59.638375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:39.933 [2024-11-20 05:48:59.638385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:39.933 [2024-11-20 05:48:59.638393] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:37:39.933 [2024-11-20 05:48:59.638400] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:39.933 [2024-11-20 05:48:59.638408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:39.933 [2024-11-20 05:48:59.648226] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:37:39.933 [2024-11-20 05:48:59.648252] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:37:39.933 [2024-11-20 05:48:59.648261] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.648269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:39.933 [2024-11-20 05:48:59.648305] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:37:39.933 [2024-11-20 05:48:59.648369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:39.933 [2024-11-20 05:48:59.648396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613280 with addr=10.0.0.3, port=4420 00:37:39.933 [2024-11-20 05:48:59.648412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x613280 is same with the state(6) to be set 00:37:39.933 [2024-11-20 05:48:59.648434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x613280 (9): Bad file descriptor 00:37:39.934 [2024-11-20 05:48:59.648460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:39.934 [2024-11-20 05:48:59.648472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:39.934 [2024-11-20 05:48:59.648485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:39.934 [2024-11-20 05:48:59.648499] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
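Immediately below, the poller finally acts on the refreshed discovery log page: the 4420 path is reported not found and detached while 4421 is kept, and the test confirms the surviving path through its get_subsystem_paths helper. Reconstructed by hand — the function wrapper is assumed, while the RPC, socket path, and jq filter are exactly as traced:

    # Sketch of the traced path query; expected output at this point: 4421
    get_subsystem_paths() {
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_subsystem_paths nvme0
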
00:37:39.934 [2024-11-20 05:48:59.648509] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:37:39.934 [2024-11-20 05:48:59.648516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:39.934 [2024-11-20 05:48:59.652581] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:37:39.934 [2024-11-20 05:48:59.652619] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:39.934 05:48:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:39.934 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:40.193 05:48:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.129 [2024-11-20 05:49:00.945786] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:37:41.129 [2024-11-20 05:49:00.945828] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:37:41.129 [2024-11-20 05:49:00.945852] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:37:41.129 [2024-11-20 05:49:01.031880] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:37:41.388 [2024-11-20 05:49:01.090225] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:37:41.388 [2024-11-20 05:49:01.090735] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x778fb0:1 started. 00:37:41.388 [2024-11-20 05:49:01.092722] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:37:41.388 [2024-11-20 05:49:01.092764] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.388 [2024-11-20 05:49:01.094704] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x778fb0 was disconnected and freed. delete nvme_qpair. 
00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.388 2024/11/20 05:49:01 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:37:41.388 request: 00:37:41.388 { 00:37:41.388 "method": "bdev_nvme_start_discovery", 00:37:41.388 "params": { 00:37:41.388 "name": "nvme", 00:37:41.388 "trtype": "tcp", 00:37:41.388 "traddr": "10.0.0.3", 00:37:41.388 "adrfam": "ipv4", 00:37:41.388 "trsvcid": "8009", 00:37:41.388 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:41.388 "wait_for_attach": true 00:37:41.388 } 00:37:41.388 } 00:37:41.388 Got JSON-RPC error response 00:37:41.388 GoRPCClient: error on JSON-RPC call 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.388 05:49:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:41.388 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.389 2024/11/20 05:49:01 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:37:41.389 request: 00:37:41.389 { 00:37:41.389 "method": "bdev_nvme_start_discovery", 00:37:41.389 "params": { 00:37:41.389 "name": "nvme_second", 00:37:41.389 "trtype": "tcp", 00:37:41.389 "traddr": "10.0.0.3", 00:37:41.389 "adrfam": "ipv4", 00:37:41.389 "trsvcid": "8009", 00:37:41.389 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:41.389 "wait_for_attach": true 00:37:41.389 } 00:37:41.389 } 00:37:41.389 Got JSON-RPC error response 00:37:41.389 GoRPCClient: error on JSON-RPC call 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
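For reference, the two failing attempts above amount to re-issuing bdev_nvme_start_discovery against the host socket while a discovery service is already attached to 10.0.0.3:8009. A minimal standalone sketch, assembled from the request dumps in the log (rpc_cmd wraps scripts/rpc.py; socket path and arguments exactly as shown, nothing added):

    # Registered earlier in the test, so repeating it must fail with
    # Code=-17 (File exists) -- exactly what the NOT wrapper asserts.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # A new controller name on the same discovery endpoint collides as well:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w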
00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:41.389 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:41.647 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.647 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:41.647 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:41.647 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:41.647 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:41.648 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:41.648 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.648 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:41.648 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.648 
05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:41.648 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.648 05:49:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:42.583 [2024-11-20 05:49:02.337193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:42.583 [2024-11-20 05:49:02.337247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64cb40 with addr=10.0.0.3, port=8010 00:37:42.583 [2024-11-20 05:49:02.337277] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:42.583 [2024-11-20 05:49:02.337292] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:42.583 [2024-11-20 05:49:02.337304] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:37:43.578 [2024-11-20 05:49:03.337189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:43.578 [2024-11-20 05:49:03.337238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64cb40 with addr=10.0.0.3, port=8010 00:37:43.578 [2024-11-20 05:49:03.337266] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:43.578 [2024-11-20 05:49:03.337280] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:43.578 [2024-11-20 05:49:03.337292] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:37:44.513 [2024-11-20 05:49:04.337111] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:37:44.513 2024/11/20 05:49:04 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:37:44.513 request: 00:37:44.513 { 00:37:44.513 "method": "bdev_nvme_start_discovery", 00:37:44.513 "params": { 00:37:44.513 "name": "nvme_second", 00:37:44.513 "trtype": "tcp", 00:37:44.513 "traddr": "10.0.0.3", 00:37:44.513 "adrfam": "ipv4", 00:37:44.513 "trsvcid": "8010", 00:37:44.513 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:44.513 "wait_for_attach": false, 00:37:44.513 "attach_timeout_ms": 3000 00:37:44.513 } 00:37:44.513 } 00:37:44.513 Got JSON-RPC error response 00:37:44.513 GoRPCClient: error on JSON-RPC call 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89605 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:44.513 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:44.772 rmmod nvme_tcp 00:37:44.772 rmmod nvme_fabrics 00:37:44.772 rmmod nvme_keyring 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 89573 ']' 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 89573 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 89573 ']' 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 89573 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89573 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:44.772 killing process with pid 89573 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89573' 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 89573 00:37:44.772 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@976 -- # wait 89573 00:37:45.030 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:45.031 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:45.290 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:37:45.290 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.290 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:45.290 05:49:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:37:45.290 00:37:45.290 real 0m10.676s 00:37:45.290 user 0m20.053s 00:37:45.290 sys 0m2.268s 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:45.290 ************************************ 00:37:45.290 END TEST nvmf_host_discovery 00:37:45.290 ************************************ 
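The nvmftestfini sequence that closes the test above reduces to roughly the following; a hedged recap stitched together from the commands the log shows (pid 89605 is the host app, 89573 the nvmf target; module and interface names exactly as printed), not a verbatim copy of nvmf/common.sh:

    kill 89605                                   # host/discovery.sh@161: stop the host app
    modprobe -v -r nvme-tcp                      # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 89573                                   # killprocess: the nvmf_tgt (reactor_1) for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    # nvmf_veth_fini: detach and delete the test links, then remove the namespace
    ip link set nvmf_init_br nomaster            # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if                  # likewise nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # likewise nvmf_tgt_if2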
00:37:45.290 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:45.290 ************************************ 00:37:45.290 START TEST nvmf_host_multipath_status 00:37:45.290 ************************************ 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:37:45.290 * Looking for test storage... 00:37:45.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:37:45.290 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.549 --rc genhtml_branch_coverage=1 00:37:45.549 --rc genhtml_function_coverage=1 00:37:45.549 --rc genhtml_legend=1 00:37:45.549 --rc geninfo_all_blocks=1 00:37:45.549 --rc geninfo_unexecuted_blocks=1 00:37:45.549 00:37:45.549 ' 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.549 --rc genhtml_branch_coverage=1 00:37:45.549 --rc genhtml_function_coverage=1 00:37:45.549 --rc genhtml_legend=1 00:37:45.549 --rc geninfo_all_blocks=1 00:37:45.549 --rc geninfo_unexecuted_blocks=1 00:37:45.549 00:37:45.549 ' 00:37:45.549 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.549 --rc genhtml_branch_coverage=1 00:37:45.550 --rc genhtml_function_coverage=1 00:37:45.550 --rc genhtml_legend=1 00:37:45.550 --rc geninfo_all_blocks=1 00:37:45.550 --rc geninfo_unexecuted_blocks=1 00:37:45.550 00:37:45.550 ' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:45.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.550 --rc genhtml_branch_coverage=1 00:37:45.550 --rc genhtml_function_coverage=1 00:37:45.550 --rc genhtml_legend=1 00:37:45.550 --rc geninfo_all_blocks=1 00:37:45.550 --rc geninfo_unexecuted_blocks=1 00:37:45.550 00:37:45.550 ' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:45.550 05:49:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:45.550 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:45.550 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:37:45.551 Cannot find device "nvmf_init_br" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:37:45.551 Cannot find device "nvmf_init_br2" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:37:45.551 Cannot find device "nvmf_tgt_br" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:37:45.551 Cannot find device "nvmf_tgt_br2" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:37:45.551 Cannot find device "nvmf_init_br" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:37:45.551 Cannot find device "nvmf_init_br2" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:37:45.551 Cannot find device "nvmf_tgt_br" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:37:45.551 Cannot find device "nvmf_tgt_br2" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:37:45.551 Cannot find device "nvmf_br" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:37:45.551 Cannot find device "nvmf_init_if" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:37:45.551 Cannot find device "nvmf_init_if2" 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:45.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:45.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:37:45.551 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:37:45.810 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:45.810 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:37:45.810 00:37:45.810 --- 10.0.0.3 ping statistics --- 00:37:45.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.810 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:37:45.810 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:37:45.810 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.028 ms 00:37:45.810 00:37:45.810 --- 10.0.0.4 ping statistics --- 00:37:45.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.810 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:45.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:45.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:37:45.810 00:37:45.810 --- 10.0.0.1 ping statistics --- 00:37:45.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.810 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:37:45.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:45.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:37:45.810 00:37:45.810 --- 10.0.0.2 ping statistics --- 00:37:45.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.810 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=90136 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 90136 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 90136 ']' 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:45.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
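The ip and iptables commands replayed above assemble the dual-path test network that everything else in this run depends on: veth pairs for the initiator side (nvmf_init_if, nvmf_init_if2) and the target side (nvmf_tgt_if, nvmf_tgt_if2, moved into the nvmf_tgt_ns_spdk namespace), all bridge-side peers enslaved to nvmf_br, ACCEPT rules for TCP port 4420, and ping probes in both directions. A minimal standalone sketch of the same pattern, simplified to one pair per side (names and addresses taken from the log; needs root):

  # one initiator-side and one target-side veth pair, target end in its own netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address both ends of the data path
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bring links up and tie the bridge-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # admit NVMe/TCP traffic on the initiator interface, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1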
00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:45.810 05:49:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:46.068 [2024-11-20 05:49:05.742796] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:46.068 [2024-11-20 05:49:05.742860] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.068 [2024-11-20 05:49:05.883294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:46.068 [2024-11-20 05:49:05.925722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.068 [2024-11-20 05:49:05.925915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.068 [2024-11-20 05:49:05.925970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.068 [2024-11-20 05:49:05.926020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.068 [2024-11-20 05:49:05.926074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:46.068 [2024-11-20 05:49:05.927094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.068 [2024-11-20 05:49:05.927094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90136 00:37:46.326 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:46.609 [2024-11-20 05:49:06.351561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:46.609 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:37:46.867 Malloc0 00:37:46.867 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:37:47.125 05:49:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:47.384 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:37:47.643 [2024-11-20 05:49:07.340608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:37:47.643 [2024-11-20 05:49:07.528990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90222 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90222 /var/tmp/bdevperf.sock 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 90222 ']' 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:47.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
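That completes the target-side build-out over JSON-RPC: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, a subsystem with ANA reporting enabled (-r) and room for two namespaces (-m 2), the namespace itself, and one listener per path on ports 4420 and 4421. Collected into one place, the sequence just replayed amounts to (arguments verbatim from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport options (-o -u 8192) exactly as passed by the harness
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                # 64 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2                     # any host, ANA reporting on
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

With both listeners up, bdevperf is started with -z on /var/tmp/bdevperf.sock so it idles until the perform_tests RPC seen later, giving the script time to attach the two Nvme0 multipath legs first.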
00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:47.643 05:49:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:49.019 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:49.019 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:37:49.019 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:37:49.019 05:49:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:37:49.278 Nvme0n1 00:37:49.278 05:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:37:49.536 Nvme0n1 00:37:49.536 05:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:37:49.536 05:49:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:37:52.067 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:37:52.067 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:37:52.067 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:37:52.067 05:49:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:37:53.002 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:37:53.003 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:53.003 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.003 05:49:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:53.261 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.261 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:53.261 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:53.261 05:49:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.519 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:53.519 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:53.519 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.519 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:53.778 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.778 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:53.778 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:53.778 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:54.036 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:54.036 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:54.036 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:54.036 05:49:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:54.295 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:54.295 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:54.295 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:54.295 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:54.553 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:54.553 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:37:54.553 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:37:54.812 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
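The ANA flips above are the script's set_ANA_state helper: one nvmf_subsystem_listener_set_ana_state call per listener, followed by a one-second sleep so the host can re-read the ANA state. Reconstructed from the paired @59/@60 calls in the log, the helper amounts to:

  # set_ANA_state <state for 4420> <state for 4421>
  # states exercised in this run: optimized, non_optimized, inaccessible
  set_ANA_state() {
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }
  set_ANA_state non_optimized optimized   # the case that just ran
  sleep 1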
00:37:55.070 05:49:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:37:56.005 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:37:56.005 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:56.005 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:56.005 05:49:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:56.263 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:56.264 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:56.264 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:56.264 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:56.522 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:56.522 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:56.522 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:56.522 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:56.781 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:56.781 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:56.781 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:56.781 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:57.040 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:57.040 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:57.040 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:57.040 05:49:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:57.299 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:57.299 05:49:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:57.299 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:57.299 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:57.558 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:57.558 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:37:57.558 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:37:57.816 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:37:58.075 05:49:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:37:59.009 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:37:59.009 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:59.009 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.009 05:49:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:59.267 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:59.267 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:59.267 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.268 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:59.526 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:59.526 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:59.526 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:59.526 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.784 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:59.784 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:59.784 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:59.785 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:00.043 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:00.043 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:00.043 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:00.043 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:00.302 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:00.302 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:00.302 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:00.302 05:49:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:00.560 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:00.561 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:38:00.561 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:38:00.561 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:38:00.819 05:49:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:38:01.756 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:38:01.756 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:01.756 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:01.756 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:02.321 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.321 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:38:02.321 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.321 05:49:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:02.321 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:02.321 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:02.321 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:02.321 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.578 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.578 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:02.578 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.578 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:02.837 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:02.837 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:02.837 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:02.837 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:03.109 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:03.109 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:03.109 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:03.109 05:49:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:03.404 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:03.404 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:38:03.404 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:38:03.705 05:49:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:38:03.972 05:49:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:38:04.909 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:38:04.909 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:04.909 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:04.909 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:05.168 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:05.168 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:05.168 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:05.168 05:49:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:05.427 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:05.427 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:05.427 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:05.427 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:05.685 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:05.685 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:05.685 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:05.685 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:05.944 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:05.944 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:05.944 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:05.944 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:38:06.202 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:06.202 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:06.202 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:06.202 05:49:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:06.462 05:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:06.462 05:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:38:06.462 05:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:38:06.720 05:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:38:06.978 05:49:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:38:07.915 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:38:07.915 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:07.915 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:07.915 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:08.174 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:08.174 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:08.174 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:08.174 05:49:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:08.433 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:08.433 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:08.433 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:08.433 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:38:08.692 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:08.692 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:08.692 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:08.692 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:08.950 05:49:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:09.209 05:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:09.209 05:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:38:09.482 05:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:38:09.482 05:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:38:09.742 05:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:38:10.000 05:49:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:38:10.936 05:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:38:10.936 05:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:10.936 05:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
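The bdev_nvme_set_multipath_policy call a few entries above moves Nvme0n1 into active_active mode, so from this point both optimized paths are expected to carry I/O at once; that is why the check that follows is check_status true true ... rather than one current path and one standby. The switch itself is a single RPC against the bdevperf socket:

  # spread I/O across every optimized path instead of a single active one
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active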
00:38:10.936 05:49:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:11.195 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:11.195 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:11.195 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:11.195 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:11.455 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:11.455 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:11.455 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:11.455 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:11.714 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:11.714 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:11.714 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:11.714 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:11.973 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:11.974 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:11.974 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:11.974 05:49:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.233 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.233 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:12.233 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:12.233 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:12.492 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:12.492 
05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:38:12.492 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:38:12.752 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:38:13.010 05:49:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:38:13.945 05:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:38:13.945 05:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:38:13.945 05:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:13.945 05:49:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:14.203 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:14.203 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:14.203 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.203 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:14.461 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.461 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:14.461 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:14.461 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:14.719 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:14.720 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:14.720 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:14.720 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:15.286 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:15.286 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:15.286 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:15.286 05:49:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:15.286 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:15.286 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:15.286 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:15.286 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:15.544 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:15.544 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:38:15.544 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:38:15.803 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:38:16.063 05:49:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:38:17.000 05:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:38:17.000 05:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:17.000 05:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:17.000 05:49:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:17.259 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:17.259 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:38:17.259 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:17.259 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:17.518 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:17.518 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:38:17.518 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:17.518 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:17.777 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:17.777 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:17.777 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:17.777 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:18.037 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:18.037 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:18.037 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:18.037 05:49:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:18.296 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:18.296 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:38:18.296 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:18.296 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:38:18.555 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:18.555 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:38:18.555 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:38:18.813 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:38:19.072 05:49:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:38:20.008 05:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:38:20.008 05:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:38:20.008 05:49:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:20.008 05:49:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:38:20.266 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:20.266 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:38:20.266 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:20.266 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:38:20.526 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:20.526 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:38:20.526 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:20.526 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:38:20.785 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:20.785 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:38:20.785 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:20.785 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:38:21.089 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:21.089 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:38:21.089 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:38:21.089 05:49:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:21.370 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:38:21.370 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:38:21.370 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:38:21.370 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90222 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 90222 ']' 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 90222 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90222 00:38:21.628 killing process with pid 90222 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90222' 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 90222 00:38:21.628 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 90222 00:38:21.629 { 00:38:21.629 "results": [ 00:38:21.629 { 00:38:21.629 "job": "Nvme0n1", 00:38:21.629 "core_mask": "0x4", 00:38:21.629 "workload": "verify", 00:38:21.629 "status": "terminated", 00:38:21.629 "verify_range": { 00:38:21.629 "start": 0, 00:38:21.629 "length": 16384 00:38:21.629 }, 00:38:21.629 "queue_depth": 128, 00:38:21.629 "io_size": 4096, 00:38:21.629 "runtime": 31.985695, 00:38:21.629 "iops": 10664.798748315457, 00:38:21.629 "mibps": 41.659370110607256, 00:38:21.629 "io_failed": 0, 00:38:21.629 "io_timeout": 0, 00:38:21.629 "avg_latency_us": 11980.298017731733, 00:38:21.629 "min_latency_us": 133.60761904761904, 00:38:21.629 "max_latency_us": 4026531.84 00:38:21.629 } 00:38:21.629 ], 00:38:21.629 "core_count": 1 00:38:21.629 } 00:38:21.890 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90222 00:38:21.890 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:38:21.890 [2024-11-20 05:49:07.582639] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:38:21.890 [2024-11-20 05:49:07.582729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90222 ] 00:38:21.890 [2024-11-20 05:49:07.732527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.890 [2024-11-20 05:49:07.787754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:21.890 Running I/O for 90 seconds... 
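
Every multipath assertion traced above funnels through one helper at host/multipath_status.sh@64: dump the initiator-side I/O paths over the bdevperf RPC socket, pick out a single attribute of the path on a given port with jq, and test it against the expected value. A minimal sketch of port_status, and of the set_ANA_state helper at @59-60, pieced together from this xtrace output alone (function bodies and variable names are reconstructions, not the actual script source):

    port_status() {
        # port_status <trsvcid> <attribute> <expected>, e.g. port_status 4420 current true
        local port=$1 attr=$2 expected=$3 actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        # The [[ true == \t\r\u\e ]] lines in the trace are this comparison, with
        # the right-hand side glob-escaped by xtrace.
        [[ $actual == "$expected" ]]
    }

    set_ANA_state() {
        # Set the ANA state of the two target listeners (ports 4420 and 4421);
        # the caller then sleeps 1s so the host sees the ANA change before
        # check_status re-runs the port_status assertions.
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

The bdevperf log the test just cat'ed from try.txt continues below.
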
00:38:21.890 11295.00 IOPS, 44.12 MiB/s [2024-11-20T05:49:41.809Z] 11375.50 IOPS, 44.44 MiB/s [2024-11-20T05:49:41.809Z] 11363.33 IOPS, 44.39 MiB/s [2024-11-20T05:49:41.809Z] 11351.25 IOPS, 44.34 MiB/s [2024-11-20T05:49:41.809Z] 11314.60 IOPS, 44.20 MiB/s [2024-11-20T05:49:41.809Z] 11202.50 IOPS, 43.76 MiB/s [2024-11-20T05:49:41.809Z] 11160.29 IOPS, 43.59 MiB/s [2024-11-20T05:49:41.809Z] 11078.12 IOPS, 43.27 MiB/s [2024-11-20T05:49:41.809Z] 11104.44 IOPS, 43.38 MiB/s [2024-11-20T05:49:41.809Z] 11074.10 IOPS, 43.26 MiB/s [2024-11-20T05:49:41.809Z] 11072.45 IOPS, 43.25 MiB/s [2024-11-20T05:49:41.809Z] 11123.58 IOPS, 43.45 MiB/s [2024-11-20T05:49:41.809Z] 11155.77 IOPS, 43.58 MiB/s [2024-11-20T05:49:41.809Z] [2024-11-20 05:49:23.367806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.890 [2024-11-20 05:49:23.367863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 
cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.370975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.370988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.371012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.371025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.371049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.371062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.371086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.371099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.371123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.371136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.371160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.371173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.371197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.371210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:21.890 [2024-11-20 05:49:23.371233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.890 [2024-11-20 05:49:23.371247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:23.371837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:23.371851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:21.891 11097.29 IOPS, 43.35 MiB/s [2024-11-20T05:49:41.810Z] 10357.47 IOPS, 40.46 MiB/s [2024-11-20T05:49:41.810Z] 9710.12 IOPS, 37.93 MiB/s [2024-11-20T05:49:41.810Z] 9138.94 IOPS, 35.70 MiB/s [2024-11-20T05:49:41.810Z] 8700.17 IOPS, 33.99 MiB/s [2024-11-20T05:49:41.810Z] 8860.42 IOPS, 34.61 MiB/s [2024-11-20T05:49:41.810Z] 8973.20 IOPS, 35.05 MiB/s [2024-11-20T05:49:41.810Z] 9262.52 IOPS, 36.18 MiB/s [2024-11-20T05:49:41.810Z] 9545.86 IOPS, 37.29 MiB/s [2024-11-20T05:49:41.810Z] 9809.70 IOPS, 38.32 MiB/s 
[2024-11-20T05:49:41.810Z] 9888.75 IOPS, 38.63 MiB/s [2024-11-20T05:49:41.810Z] 9965.32 IOPS, 38.93 MiB/s [2024-11-20T05:49:41.810Z] 10036.58 IOPS, 39.21 MiB/s [2024-11-20T05:49:41.810Z] 10183.41 IOPS, 39.78 MiB/s [2024-11-20T05:49:41.810Z] 10356.46 IOPS, 40.45 MiB/s [2024-11-20T05:49:41.810Z] 10525.62 IOPS, 41.12 MiB/s [2024-11-20T05:49:41.810Z] [2024-11-20 05:49:38.829082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.891 [2024-11-20 05:49:38.829144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.891 [2024-11-20 05:49:38.829202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.891 [2024-11-20 05:49:38.829234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.891 [2024-11-20 05:49:38.829266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.891 [2024-11-20 05:49:38.829776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:38:21.891 [2024-11-20 05:49:38.829795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.891 [2024-11-20 05:49:38.829808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.829827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.829847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.829866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.829879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.829898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.829911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.830492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.830530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.830561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.830593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126040 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.830974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:126200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.830987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.831113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.831145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.831177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.892 [2024-11-20 05:49:38.831215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:21.892 [2024-11-20 05:49:38.831957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:38:21.892 [2024-11-20 05:49:38.831971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:38:21.892 10603.30 IOPS, 41.42 MiB/s [2024-11-20T05:49:41.811Z] 10635.35 IOPS, 41.54 MiB/s [2024-11-20T05:49:41.811Z]
Received shutdown signal, test time was about 31.986366 seconds
00:38:21.892
00:38:21.892 Latency(us)
00:38:21.892 [2024-11-20T05:49:41.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:21.892 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:38:21.892 Verification LBA range: start 0x0 length 0x4000
00:38:21.892 Nvme0n1 : 31.99 10664.80 41.66 0.00 0.00 11980.30 133.61 4026531.84
00:38:21.892 [2024-11-20T05:49:41.811Z] ===================================================================================================================
00:38:21.892 [2024-11-20T05:49:41.811Z] Total : 10664.80 41.66 0.00 0.00 11980.30 133.61 4026531.84
00:38:21.892 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:22.152 05:49:41
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:22.152 rmmod nvme_tcp 00:38:22.152 rmmod nvme_fabrics 00:38:22.152 rmmod nvme_keyring 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 90136 ']' 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 90136 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 90136 ']' 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 90136 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90136 00:38:22.152 killing process with pid 90136 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90136' 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 90136 00:38:22.152 05:49:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 90136 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
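
The firewall cleanup traced just above (nvmf/common.sh@297, expanding into the three @791 commands) removes only the rules SPDK added: it exports the whole ruleset, filters out the SPDK_NVMF-tagged lines, and loads the remainder back. Roughly, as inferred from this trace alone (the veth/bridge teardown it is part of continues below):

    iptr() {
        # Round-trip the ruleset so unrelated iptables rules survive the cleanup.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
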
00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:22.411 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:38:22.670 00:38:22.670 real 0m37.383s 00:38:22.670 user 1m58.179s 00:38:22.670 sys 0m12.102s 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:22.670 ************************************ 00:38:22.670 END TEST nvmf_host_multipath_status 00:38:22.670 ************************************ 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:22.670 ************************************ 00:38:22.670 START TEST nvmf_discovery_remove_ifc 00:38:22.670 ************************************ 00:38:22.670 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:22.929 * Looking for test storage... 
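
The discovery_remove_ifc test that starts here first decides whether the installed lcov is older than version 2 (the "lt 1.15 2" probe), and the whole decision is traced step by step in the lines that follow: split both version strings on ".", "-" and ":", normalize each component with decimal, and let the first differing component decide. A sketch of those scripts/common.sh helpers reassembled from that trace (a reconstruction, so details may differ from the actual source):

    lt() { cmp_versions "$1" '<' "$2"; }

    decimal() {
        # Numeric components pass through; anything else compares as 0.
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] || d=0
        echo "$d"
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 lt=0 gt=0 eq=0 v
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

        # Remember which outcome satisfies the requested operator.
        case "$op" in
            "<") : $((lt = 1)) ;;
            ">") : $((gt = 1)) ;;
            "==") : $((eq = 1)) ;;
        esac

        # Compare component by component; the first difference decides.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]}")
            ver2[v]=$(decimal "${ver2[v]}")
            ((ver1[v] > ver2[v])) && return $((!gt))
            ((ver1[v] < ver2[v])) && return $((!lt))
        done
        # All components equal.
        return $((!eq))
    }

Here lt 1.15 2 compares 1 against 2 in the first position and succeeds, which is why the trace below ends in return 0 and the lcov-dependent LCOV_OPTS coverage flags get exported.
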
00:38:22.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:38:22.929 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:22.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.930 --rc genhtml_branch_coverage=1 00:38:22.930 --rc genhtml_function_coverage=1 00:38:22.930 --rc genhtml_legend=1 00:38:22.930 --rc geninfo_all_blocks=1 00:38:22.930 --rc geninfo_unexecuted_blocks=1 00:38:22.930 00:38:22.930 ' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:22.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.930 --rc genhtml_branch_coverage=1 00:38:22.930 --rc genhtml_function_coverage=1 00:38:22.930 --rc genhtml_legend=1 00:38:22.930 --rc geninfo_all_blocks=1 00:38:22.930 --rc geninfo_unexecuted_blocks=1 00:38:22.930 00:38:22.930 ' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:22.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.930 --rc genhtml_branch_coverage=1 00:38:22.930 --rc genhtml_function_coverage=1 00:38:22.930 --rc genhtml_legend=1 00:38:22.930 --rc geninfo_all_blocks=1 00:38:22.930 --rc geninfo_unexecuted_blocks=1 00:38:22.930 00:38:22.930 ' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:22.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.930 --rc genhtml_branch_coverage=1 00:38:22.930 --rc genhtml_function_coverage=1 00:38:22.930 --rc genhtml_legend=1 00:38:22.930 --rc geninfo_all_blocks=1 00:38:22.930 --rc geninfo_unexecuted_blocks=1 00:38:22.930 00:38:22.930 ' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:22.930 05:49:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:22.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:38:22.930 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:22.931 05:49:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:22.931 Cannot find device "nvmf_init_br" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:22.931 Cannot find device "nvmf_init_br2" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:22.931 Cannot find device "nvmf_tgt_br" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:22.931 Cannot find device "nvmf_tgt_br2" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:22.931 Cannot find device "nvmf_init_br" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:22.931 Cannot find device "nvmf_init_br2" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:22.931 Cannot find device "nvmf_tgt_br" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:22.931 Cannot find device "nvmf_tgt_br2" 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:38:22.931 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:23.190 Cannot find device "nvmf_br" 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:23.190 Cannot find device "nvmf_init_if" 00:38:23.190 05:49:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:23.190 Cannot find device "nvmf_init_if2" 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:23.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:23.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:23.190 05:49:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:23.190 05:49:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:23.190 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:23.191 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:23.191 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:23.191 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:23.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:23.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:38:23.450 00:38:23.450 --- 10.0.0.3 ping statistics --- 00:38:23.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.450 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:23.450 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:23.450 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:38:23.450 00:38:23.450 --- 10.0.0.4 ping statistics --- 00:38:23.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.450 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:23.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:23.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:38:23.450 00:38:23.450 --- 10.0.0.1 ping statistics --- 00:38:23.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.450 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:23.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:38:23.450 00:38:23.450 --- 10.0.0.2 ping statistics --- 00:38:23.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.450 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91573 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91573 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 91573 ']' 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:23.450 05:49:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:23.450 [2024-11-20 05:49:43.207642] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:38:23.450 [2024-11-20 05:49:43.207873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.450 [2024-11-20 05:49:43.351269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.709 [2024-11-20 05:49:43.395823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.709 [2024-11-20 05:49:43.395866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.709 [2024-11-20 05:49:43.395875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.709 [2024-11-20 05:49:43.395898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.709 [2024-11-20 05:49:43.395905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.709 [2024-11-20 05:49:43.396166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.275 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:24.534 [2024-11-20 05:49:44.201987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.534 [2024-11-20 05:49:44.210132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:38:24.534 null0 00:38:24.534 [2024-11-20 05:49:44.242055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91626 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91626 /tmp/host.sock 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 91626 ']' 00:38:24.534 05:49:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:38:24.534 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:24.534 05:49:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:24.534 [2024-11-20 05:49:44.325710] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:38:24.534 [2024-11-20 05:49:44.326011] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91626 ] 00:38:24.793 [2024-11-20 05:49:44.480711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.793 [2024-11-20 05:49:44.534265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.360 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.361 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:25.620 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.620 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:38:25.620 05:49:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.620 05:49:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:26.555 [2024-11-20 05:49:46.344829] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:38:26.555 [2024-11-20 05:49:46.344853] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:38:26.555 [2024-11-20 05:49:46.344867] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:38:26.555 [2024-11-20 05:49:46.430928] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:38:26.814 [2024-11-20 05:49:46.485255] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:38:26.814 [2024-11-20 05:49:46.485957] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14111b0:1 started. 00:38:26.815 [2024-11-20 05:49:46.487569] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:26.815 [2024-11-20 05:49:46.487621] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:26.815 [2024-11-20 05:49:46.487642] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:26.815 [2024-11-20 05:49:46.487657] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:38:26.815 [2024-11-20 05:49:46.487679] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:26.815 [2024-11-20 05:49:46.493472] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14111b0 was disconnected and freed. delete nvme_qpair. 
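The wait_for_bdev / get_bdev_list calls that follow are the test's polling primitive: list the host app's bdevs over the RPC socket and loop until the list matches what the step expects. Below is a minimal sketch of their shape, reconstructed from the xtrace output above; rpc_cmd is the suite's wrapper around scripts/rpc.py as used throughout this trace, and the authoritative function bodies live in test/nvmf/host/discovery_remove_ifc.sh, not here.

    HOST_SOCK=/tmp/host.sock

    get_bdev_list() {
        # Names of all bdevs known to the host app, normalized to one sorted line.
        rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        # Poll once per second until the bdev list equals the expected string
        # ("nvme0n1" after the first attach, "" once the path has been removed).
        while [[ $(get_bdev_list) != "$expected" ]]; do
            sleep 1
        done
    }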
00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:26.815 05:49:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:27.750 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:27.750 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:27.751 05:49:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:29.127 05:49:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:30.076 05:49:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:31.011 05:49:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:31.948 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.208 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:32.208 05:49:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:32.208 [2024-11-20 05:49:51.915743] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:38:32.208 [2024-11-20 05:49:51.915934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:32.208 [2024-11-20 05:49:51.915951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:32.208 [2024-11-20 05:49:51.915964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:32.208 [2024-11-20 05:49:51.915973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:32.208 [2024-11-20 05:49:51.915983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:32.208 [2024-11-20 05:49:51.915992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:32.208 [2024-11-20 05:49:51.916002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:32.208 [2024-11-20 05:49:51.916012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:32.208 [2024-11-20 05:49:51.916021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:32.208 [2024-11-20 05:49:51.916031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:32.208 [2024-11-20 05:49:51.916040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ee520 is same with the state(6) to be set 
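The errno 110 above is ETIMEDOUT: once 10.0.0.3 was removed from nvmf_tgt_if, reads on the admin queue pair time out and the bdev layer starts its disconnect/reconnect cycle. How long it keeps trying is governed by the options passed to bdev_nvme_start_discovery earlier in this run (the command below repeats that rpc_cmd call verbatim, with the knobs annotated):

    # Values as used by this run:
    #   --reconnect-delay-sec 1       wait 1 s between reconnect attempts
    #   --ctrlr-loss-timeout-sec 2    give up on the controller after ~2 s of failures
    #   --fast-io-fail-timeout-sec 1  fail queued I/O after ~1 s without a connection
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach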
00:38:32.208 [2024-11-20 05:49:51.925740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ee520 (9): Bad file descriptor 00:38:32.208 [2024-11-20 05:49:51.935753] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:38:32.208 [2024-11-20 05:49:51.935767] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:38:32.208 [2024-11-20 05:49:51.935773] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:38:32.208 [2024-11-20 05:49:51.935779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:32.208 [2024-11-20 05:49:51.935813] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:38:33.145 05:49:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:33.145 05:49:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:33.145 05:49:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:33.145 05:49:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:33.145 05:49:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:33.145 05:49:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:33.145 05:49:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:33.145 [2024-11-20 05:49:52.991503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:38:33.145 [2024-11-20 05:49:52.991573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ee520 with addr=10.0.0.3, port=4420 00:38:33.145 [2024-11-20 05:49:52.991595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ee520 is same with the state(6) to be set 00:38:33.145 [2024-11-20 05:49:52.991637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ee520 (9): Bad file descriptor 00:38:33.145 [2024-11-20 05:49:52.992207] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:38:33.145 [2024-11-20 05:49:52.992258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:38:33.145 [2024-11-20 05:49:52.992279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:38:33.145 [2024-11-20 05:49:52.992302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:38:33.145 [2024-11-20 05:49:52.992321] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:38:33.145 [2024-11-20 05:49:52.992335] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:38:33.145 [2024-11-20 05:49:52.992347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
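When reading a reconnect loop like this outside of CI, the host app's view of its controllers can be inspected over the same RPC socket. This is not part of the test, only a debugging aid; bdev_nvme_get_controllers is a standard SPDK RPC, though the exact fields it reports vary by SPDK version.

    # Dump each NVMe controller the bdev layer holds, including the
    # transport ID it is (re)connecting to.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq .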
00:38:33.145 [2024-11-20 05:49:52.992368] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:38:33.145 [2024-11-20 05:49:52.992380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:33.145 05:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:33.145 05:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:33.145 05:49:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:34.082 [2024-11-20 05:49:53.992470] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:38:34.082 [2024-11-20 05:49:53.992513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:38:34.082 [2024-11-20 05:49:53.992546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:38:34.082 [2024-11-20 05:49:53.992560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:38:34.082 [2024-11-20 05:49:53.992573] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:38:34.082 [2024-11-20 05:49:53.992582] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:38:34.082 [2024-11-20 05:49:53.992598] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:38:34.082 [2024-11-20 05:49:53.992606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
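The ACCEPT rules opened during setup (the ipts calls earlier in this log) and the iptables-save | grep -v SPDK_NVMF | iptables-restore sweep in the teardown later on are two halves of one pattern: every rule is tagged with a comment so cleanup never has to track individual rules. A sketch of the helpers' shape, based on the expanded commands visible in this log; the real definitions are in test/nvmf/common.sh.

    ipts() {
        # e.g. ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
        # Tag the rule with its own arguments so it can be found again.
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {
        # Remove every tagged rule in one pass by round-tripping the ruleset.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }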
00:38:34.082 [2024-11-20 05:49:53.992682] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:38:34.082 [2024-11-20 05:49:53.992737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.082 [2024-11-20 05:49:53.992751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.082 [2024-11-20 05:49:53.992766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.082 [2024-11-20 05:49:53.992776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.082 [2024-11-20 05:49:53.992785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.082 [2024-11-20 05:49:53.992794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.082 [2024-11-20 05:49:53.992803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.082 [2024-11-20 05:49:53.992812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.082 [2024-11-20 05:49:53.992821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:34.082 [2024-11-20 05:49:53.992830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:34.082 [2024-11-20 05:49:53.992839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
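At this point both the data controller and the discovery controller have been failed: the test's fault injection, issued a few seconds earlier inside the target's namespace, removed the listen address out from under them. The flap and the recovery that follows reduce to four ip commands, reconstructed here from the trace:

    NS=nvmf_tgt_ns_spdk
    # Take the path away: the host's connections to 10.0.0.3 start timing out.
    ip netns exec "$NS" ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip link set nvmf_tgt_if down
    # ...host retries, exhausts ctrlr-loss-timeout-sec, deletes nvme0n1...
    # Give the path back: rediscovery attaches a fresh controller as nvme1.
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip link set nvmf_tgt_if up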
00:38:34.082 [2024-11-20 05:49:53.993421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x137b2d0 (9): Bad file descriptor 00:38:34.082 [2024-11-20 05:49:53.994437] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:38:34.082 [2024-11-20 05:49:53.994465] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:34.341 05:49:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:35.278 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:35.278 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:35.278 05:49:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:35.278 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:35.278 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:35.278 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:35.278 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:35.278 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:35.536 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:35.536 05:49:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:36.103 [2024-11-20 05:49:56.006491] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:38:36.103 [2024-11-20 05:49:56.006513] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:38:36.103 [2024-11-20 05:49:56.006526] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:38:36.363 [2024-11-20 05:49:56.092578] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:38:36.363 [2024-11-20 05:49:56.146872] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:38:36.363 [2024-11-20 05:49:56.147514] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x13e7fa0:1 started. 00:38:36.363 [2024-11-20 05:49:56.148591] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:36.363 [2024-11-20 05:49:56.148654] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:36.363 [2024-11-20 05:49:56.148679] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:36.363 [2024-11-20 05:49:56.148706] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:38:36.363 [2024-11-20 05:49:56.148716] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:38:36.363 [2024-11-20 05:49:56.155073] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x13e7fa0 was disconnected and freed. delete nvme_qpair. 
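The interfaces the rediscovery just traversed, and that the teardown below deletes one by one, form the topology nvmf_veth_init built at the start of this test: four veth pairs joined by one bridge, with the target ends moved into a network namespace. In outline (a sketch of the commands logged earlier; see nvmf_veth_init in test/nvmf/common.sh for the full helper, which also assigns the addresses and brings the links up):

    ip netns add nvmf_tgt_ns_spdk
    # Initiator side (default namespace): 10.0.0.1 and 10.0.0.2.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # Target side (moved into the namespace): 10.0.0.3 and 10.0.0.4.
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # One bridge ties the four peer ends together.
    ip link add nvmf_br type bridge
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" master nvmf_br
    done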
00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91626 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 91626 ']' 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 91626 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:38:36.363 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91626 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:36.622 killing process with pid 91626 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91626' 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 91626 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 91626 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:36.622 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:36.622 rmmod nvme_tcp 00:38:36.622 rmmod nvme_fabrics 00:38:36.881 rmmod nvme_keyring 00:38:36.881 05:49:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91573 ']' 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91573 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 91573 ']' 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 91573 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91573 00:38:36.881 killing process with pid 91573 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91573' 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 91573 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 91573 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:36.881 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.139 05:49:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.139 05:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:38:37.139 00:38:37.139 real 0m14.520s 00:38:37.139 user 0m24.787s 00:38:37.139 sys 0m2.439s 00:38:37.139 05:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:37.139 05:49:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:37.139 ************************************ 00:38:37.139 END TEST nvmf_discovery_remove_ifc 00:38:37.139 ************************************ 00:38:37.399 05:49:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:37.400 ************************************ 00:38:37.400 START TEST nvmf_identify_kernel_target 00:38:37.400 ************************************ 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:38:37.400 * Looking for test storage... 
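The END TEST / START TEST banners and the real/user/sys timing just logged are produced by the harness's run_test wrapper, which runs each test script under bash's time builtin before moving on; the new test then begins by locating its test storage. A minimal sketch of such a wrapper follows (an illustration only, not SPDK's exact autotest_common.sh implementation; banner width abbreviated):

    # Sketch: time a named test script and frame it with START/END banners,
    # yielding the "real/user/sys" lines seen in the log above.
    run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"        # runs the script; bash prints real/user/sys on completion
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
    }
    # As invoked in the trace above:
    #   run_test nvmf_identify_kernel_target \
    #       /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp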
00:38:37.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.400 --rc genhtml_branch_coverage=1 00:38:37.400 --rc genhtml_function_coverage=1 00:38:37.400 --rc genhtml_legend=1 00:38:37.400 --rc geninfo_all_blocks=1 00:38:37.400 --rc geninfo_unexecuted_blocks=1 00:38:37.400 00:38:37.400 ' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.400 --rc genhtml_branch_coverage=1 00:38:37.400 --rc genhtml_function_coverage=1 00:38:37.400 --rc genhtml_legend=1 00:38:37.400 --rc geninfo_all_blocks=1 00:38:37.400 --rc geninfo_unexecuted_blocks=1 00:38:37.400 00:38:37.400 ' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.400 --rc genhtml_branch_coverage=1 00:38:37.400 --rc genhtml_function_coverage=1 00:38:37.400 --rc genhtml_legend=1 00:38:37.400 --rc geninfo_all_blocks=1 00:38:37.400 --rc geninfo_unexecuted_blocks=1 00:38:37.400 00:38:37.400 ' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:37.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.400 --rc genhtml_branch_coverage=1 00:38:37.400 --rc genhtml_function_coverage=1 00:38:37.400 --rc genhtml_legend=1 00:38:37.400 --rc geninfo_all_blocks=1 00:38:37.400 --rc geninfo_unexecuted_blocks=1 00:38:37.400 00:38:37.400 ' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
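Before the test proper starts, the trace above gates coverage collection on the installed lcov version: lt 1.15 2 splits both version strings on "." and compares them numerically field by field via cmp_versions. A condensed sketch of that comparison (same semantics as the trace; the real scripts/common.sh helper additionally validates each field with decimal()):

    # Sketch: return 0 (true) when version $1 is strictly older than $2.
    lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
      IFS='.-:' read -ra ver2 <<< "$2"   # "2"    -> (2)
      local v
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
      done
      return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the trace: 1 < 2

The source of test/nvmf/common.sh just traced then defines the NVMe-oF test environment whose setup follows.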
00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:38:37.400 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:37.401 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:38:37.401 05:49:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:37.401 05:49:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:37.401 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:37.660 Cannot find device "nvmf_init_br" 00:38:37.660 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:38:37.660 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:37.660 Cannot find device "nvmf_init_br2" 00:38:37.660 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:37.661 Cannot find device "nvmf_tgt_br" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:37.661 Cannot find device "nvmf_tgt_br2" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:37.661 Cannot find device "nvmf_init_br" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:37.661 Cannot find device "nvmf_init_br2" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:37.661 Cannot find device "nvmf_tgt_br" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:37.661 Cannot find device "nvmf_tgt_br2" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:37.661 Cannot find device "nvmf_br" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:37.661 Cannot find device "nvmf_init_if" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:37.661 Cannot find device "nvmf_init_if2" 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:37.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:37.661 05:49:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:37.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:37.661 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:37.920 05:49:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:37.920 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:37.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:37.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:38:37.920 00:38:37.920 --- 10.0.0.3 ping statistics --- 00:38:37.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.920 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:37.921 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:37.921 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:38:37.921 00:38:37.921 --- 10.0.0.4 ping statistics --- 00:38:37.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.921 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:37.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:37.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:38:37.921 00:38:37.921 --- 10.0.0.1 ping statistics --- 00:38:37.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.921 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:37.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
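At this point nvmf_veth_init has assembled the virtual test network traced above: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, all legs enslaved to the nvmf_br bridge, iptables opened for NVMe/TCP port 4420, and connectivity verified by the pings whose output continues below. A condensed replay of those commands, reduced to a single interface per side for readability (the trace itself creates two initiator and two target interfaces):

    # Sketch of the topology above: host-side initiator veth <-> bridge <-> namespaced target veth.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge joins the two sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> namespaced target, as in the trace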
00:38:37.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:38:37.921 00:38:37.921 --- 10.0.0.2 ping statistics --- 00:38:37.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.921 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:38:37.921 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:38.180 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:38.180 05:49:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:38.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:38.439 Waiting for block devices as requested 00:38:38.439 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:38.699 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:38.699 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:38:38.958 No valid GPT data, bailing 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:38:38.958 05:49:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:38:38.958 No valid GPT data, bailing 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:38:38.958 No valid GPT data, bailing 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:38:38.958 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:38:38.959 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:38:38.959 No valid GPT data, bailing 00:38:38.959 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:39.218 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -a 10.0.0.1 -t tcp -s 4420 00:38:39.218 00:38:39.218 Discovery Log Number of Records 2, Generation counter 2 00:38:39.218 =====Discovery Log Entry 0====== 00:38:39.218 trtype: tcp 00:38:39.218 adrfam: ipv4 00:38:39.218 subtype: current discovery subsystem 00:38:39.218 treq: not specified, sq flow control disable supported 00:38:39.218 portid: 1 00:38:39.218 trsvcid: 4420 00:38:39.218 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:39.218 traddr: 10.0.0.1 00:38:39.218 eflags: none 00:38:39.218 sectype: none 00:38:39.218 =====Discovery Log Entry 1====== 00:38:39.218 trtype: tcp 00:38:39.218 adrfam: ipv4 00:38:39.218 subtype: nvme subsystem 00:38:39.218 treq: not 
specified, sq flow control disable supported 00:38:39.218 portid: 1 00:38:39.218 trsvcid: 4420 00:38:39.218 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:39.218 traddr: 10.0.0.1 00:38:39.218 eflags: none 00:38:39.219 sectype: none 00:38:39.219 05:49:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:38:39.219 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:38:39.489 ===================================================== 00:38:39.489 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:38:39.489 ===================================================== 00:38:39.489 Controller Capabilities/Features 00:38:39.489 ================================ 00:38:39.489 Vendor ID: 0000 00:38:39.489 Subsystem Vendor ID: 0000 00:38:39.489 Serial Number: 4134204b22de48de88af 00:38:39.489 Model Number: Linux 00:38:39.489 Firmware Version: 6.8.9-20 00:38:39.489 Recommended Arb Burst: 0 00:38:39.489 IEEE OUI Identifier: 00 00 00 00:38:39.489 Multi-path I/O 00:38:39.489 May have multiple subsystem ports: No 00:38:39.489 May have multiple controllers: No 00:38:39.489 Associated with SR-IOV VF: No 00:38:39.489 Max Data Transfer Size: Unlimited 00:38:39.489 Max Number of Namespaces: 0 00:38:39.489 Max Number of I/O Queues: 1024 00:38:39.489 NVMe Specification Version (VS): 1.3 00:38:39.489 NVMe Specification Version (Identify): 1.3 00:38:39.489 Maximum Queue Entries: 1024 00:38:39.489 Contiguous Queues Required: No 00:38:39.489 Arbitration Mechanisms Supported 00:38:39.489 Weighted Round Robin: Not Supported 00:38:39.489 Vendor Specific: Not Supported 00:38:39.489 Reset Timeout: 7500 ms 00:38:39.489 Doorbell Stride: 4 bytes 00:38:39.489 NVM Subsystem Reset: Not Supported 00:38:39.489 Command Sets Supported 00:38:39.489 NVM Command Set: Supported 00:38:39.489 Boot Partition: Not Supported 00:38:39.489 Memory Page Size Minimum: 4096 bytes 00:38:39.489 Memory Page Size Maximum: 4096 bytes 00:38:39.489 Persistent Memory Region: Not Supported 00:38:39.489 Optional Asynchronous Events Supported 00:38:39.489 Namespace Attribute Notices: Not Supported 00:38:39.489 Firmware Activation Notices: Not Supported 00:38:39.489 ANA Change Notices: Not Supported 00:38:39.489 PLE Aggregate Log Change Notices: Not Supported 00:38:39.489 LBA Status Info Alert Notices: Not Supported 00:38:39.489 EGE Aggregate Log Change Notices: Not Supported 00:38:39.489 Normal NVM Subsystem Shutdown event: Not Supported 00:38:39.489 Zone Descriptor Change Notices: Not Supported 00:38:39.489 Discovery Log Change Notices: Supported 00:38:39.489 Controller Attributes 00:38:39.489 128-bit Host Identifier: Not Supported 00:38:39.489 Non-Operational Permissive Mode: Not Supported 00:38:39.489 NVM Sets: Not Supported 00:38:39.489 Read Recovery Levels: Not Supported 00:38:39.489 Endurance Groups: Not Supported 00:38:39.489 Predictable Latency Mode: Not Supported 00:38:39.489 Traffic Based Keep ALive: Not Supported 00:38:39.489 Namespace Granularity: Not Supported 00:38:39.489 SQ Associations: Not Supported 00:38:39.490 UUID List: Not Supported 00:38:39.490 Multi-Domain Subsystem: Not Supported 00:38:39.490 Fixed Capacity Management: Not Supported 00:38:39.490 Variable Capacity Management: Not Supported 00:38:39.490 Delete Endurance Group: Not Supported 00:38:39.490 Delete NVM Set: Not Supported 00:38:39.490 Extended LBA Formats Supported: Not Supported 00:38:39.490 Flexible Data 
Placement Supported: Not Supported 00:38:39.490 00:38:39.490 Controller Memory Buffer Support 00:38:39.490 ================================ 00:38:39.490 Supported: No 00:38:39.490 00:38:39.490 Persistent Memory Region Support 00:38:39.490 ================================ 00:38:39.490 Supported: No 00:38:39.490 00:38:39.490 Admin Command Set Attributes 00:38:39.490 ============================ 00:38:39.490 Security Send/Receive: Not Supported 00:38:39.490 Format NVM: Not Supported 00:38:39.490 Firmware Activate/Download: Not Supported 00:38:39.490 Namespace Management: Not Supported 00:38:39.490 Device Self-Test: Not Supported 00:38:39.490 Directives: Not Supported 00:38:39.490 NVMe-MI: Not Supported 00:38:39.490 Virtualization Management: Not Supported 00:38:39.490 Doorbell Buffer Config: Not Supported 00:38:39.490 Get LBA Status Capability: Not Supported 00:38:39.490 Command & Feature Lockdown Capability: Not Supported 00:38:39.490 Abort Command Limit: 1 00:38:39.490 Async Event Request Limit: 1 00:38:39.490 Number of Firmware Slots: N/A 00:38:39.490 Firmware Slot 1 Read-Only: N/A 00:38:39.490 Firmware Activation Without Reset: N/A 00:38:39.490 Multiple Update Detection Support: N/A 00:38:39.490 Firmware Update Granularity: No Information Provided 00:38:39.490 Per-Namespace SMART Log: No 00:38:39.490 Asymmetric Namespace Access Log Page: Not Supported 00:38:39.490 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:38:39.490 Command Effects Log Page: Not Supported 00:38:39.490 Get Log Page Extended Data: Supported 00:38:39.490 Telemetry Log Pages: Not Supported 00:38:39.490 Persistent Event Log Pages: Not Supported 00:38:39.490 Supported Log Pages Log Page: May Support 00:38:39.490 Commands Supported & Effects Log Page: Not Supported 00:38:39.490 Feature Identifiers & Effects Log Page:May Support 00:38:39.490 NVMe-MI Commands & Effects Log Page: May Support 00:38:39.490 Data Area 4 for Telemetry Log: Not Supported 00:38:39.490 Error Log Page Entries Supported: 1 00:38:39.490 Keep Alive: Not Supported 00:38:39.490 00:38:39.490 NVM Command Set Attributes 00:38:39.490 ========================== 00:38:39.490 Submission Queue Entry Size 00:38:39.490 Max: 1 00:38:39.490 Min: 1 00:38:39.490 Completion Queue Entry Size 00:38:39.490 Max: 1 00:38:39.490 Min: 1 00:38:39.490 Number of Namespaces: 0 00:38:39.490 Compare Command: Not Supported 00:38:39.490 Write Uncorrectable Command: Not Supported 00:38:39.490 Dataset Management Command: Not Supported 00:38:39.490 Write Zeroes Command: Not Supported 00:38:39.490 Set Features Save Field: Not Supported 00:38:39.490 Reservations: Not Supported 00:38:39.490 Timestamp: Not Supported 00:38:39.490 Copy: Not Supported 00:38:39.490 Volatile Write Cache: Not Present 00:38:39.490 Atomic Write Unit (Normal): 1 00:38:39.490 Atomic Write Unit (PFail): 1 00:38:39.490 Atomic Compare & Write Unit: 1 00:38:39.490 Fused Compare & Write: Not Supported 00:38:39.490 Scatter-Gather List 00:38:39.490 SGL Command Set: Supported 00:38:39.490 SGL Keyed: Not Supported 00:38:39.490 SGL Bit Bucket Descriptor: Not Supported 00:38:39.490 SGL Metadata Pointer: Not Supported 00:38:39.490 Oversized SGL: Not Supported 00:38:39.490 SGL Metadata Address: Not Supported 00:38:39.490 SGL Offset: Supported 00:38:39.490 Transport SGL Data Block: Not Supported 00:38:39.490 Replay Protected Memory Block: Not Supported 00:38:39.490 00:38:39.490 Firmware Slot Information 00:38:39.490 ========================= 00:38:39.490 Active slot: 0 00:38:39.490 00:38:39.490 00:38:39.490 Error Log 
00:38:39.490 ========= 00:38:39.490 00:38:39.490 Active Namespaces 00:38:39.490 ================= 00:38:39.490 Discovery Log Page 00:38:39.490 ================== 00:38:39.490 Generation Counter: 2 00:38:39.490 Number of Records: 2 00:38:39.490 Record Format: 0 00:38:39.490 00:38:39.490 Discovery Log Entry 0 00:38:39.490 ---------------------- 00:38:39.490 Transport Type: 3 (TCP) 00:38:39.490 Address Family: 1 (IPv4) 00:38:39.490 Subsystem Type: 3 (Current Discovery Subsystem) 00:38:39.490 Entry Flags: 00:38:39.490 Duplicate Returned Information: 0 00:38:39.490 Explicit Persistent Connection Support for Discovery: 0 00:38:39.490 Transport Requirements: 00:38:39.490 Secure Channel: Not Specified 00:38:39.490 Port ID: 1 (0x0001) 00:38:39.490 Controller ID: 65535 (0xffff) 00:38:39.490 Admin Max SQ Size: 32 00:38:39.490 Transport Service Identifier: 4420 00:38:39.490 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:38:39.490 Transport Address: 10.0.0.1 00:38:39.490 Discovery Log Entry 1 00:38:39.490 ---------------------- 00:38:39.490 Transport Type: 3 (TCP) 00:38:39.490 Address Family: 1 (IPv4) 00:38:39.490 Subsystem Type: 2 (NVM Subsystem) 00:38:39.490 Entry Flags: 00:38:39.490 Duplicate Returned Information: 0 00:38:39.490 Explicit Persistent Connection Support for Discovery: 0 00:38:39.490 Transport Requirements: 00:38:39.490 Secure Channel: Not Specified 00:38:39.490 Port ID: 1 (0x0001) 00:38:39.490 Controller ID: 65535 (0xffff) 00:38:39.490 Admin Max SQ Size: 32 00:38:39.490 Transport Service Identifier: 4420 00:38:39.490 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:38:39.490 Transport Address: 10.0.0.1 00:38:39.490 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:39.490 get_feature(0x01) failed 00:38:39.490 get_feature(0x02) failed 00:38:39.490 get_feature(0x04) failed 00:38:39.490 ===================================================== 00:38:39.490 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:39.490 ===================================================== 00:38:39.490 Controller Capabilities/Features 00:38:39.490 ================================ 00:38:39.490 Vendor ID: 0000 00:38:39.490 Subsystem Vendor ID: 0000 00:38:39.490 Serial Number: fc2d9ca0cc650783aa6a 00:38:39.490 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:38:39.490 Firmware Version: 6.8.9-20 00:38:39.490 Recommended Arb Burst: 6 00:38:39.490 IEEE OUI Identifier: 00 00 00 00:38:39.490 Multi-path I/O 00:38:39.490 May have multiple subsystem ports: Yes 00:38:39.490 May have multiple controllers: Yes 00:38:39.490 Associated with SR-IOV VF: No 00:38:39.490 Max Data Transfer Size: Unlimited 00:38:39.490 Max Number of Namespaces: 1024 00:38:39.490 Max Number of I/O Queues: 128 00:38:39.490 NVMe Specification Version (VS): 1.3 00:38:39.490 NVMe Specification Version (Identify): 1.3 00:38:39.490 Maximum Queue Entries: 1024 00:38:39.490 Contiguous Queues Required: No 00:38:39.490 Arbitration Mechanisms Supported 00:38:39.490 Weighted Round Robin: Not Supported 00:38:39.490 Vendor Specific: Not Supported 00:38:39.490 Reset Timeout: 7500 ms 00:38:39.490 Doorbell Stride: 4 bytes 00:38:39.490 NVM Subsystem Reset: Not Supported 00:38:39.490 Command Sets Supported 00:38:39.490 NVM Command Set: Supported 00:38:39.490 Boot Partition: Not Supported 00:38:39.490 Memory 
Page Size Minimum: 4096 bytes 00:38:39.490 Memory Page Size Maximum: 4096 bytes 00:38:39.490 Persistent Memory Region: Not Supported 00:38:39.490 Optional Asynchronous Events Supported 00:38:39.490 Namespace Attribute Notices: Supported 00:38:39.490 Firmware Activation Notices: Not Supported 00:38:39.490 ANA Change Notices: Supported 00:38:39.490 PLE Aggregate Log Change Notices: Not Supported 00:38:39.490 LBA Status Info Alert Notices: Not Supported 00:38:39.490 EGE Aggregate Log Change Notices: Not Supported 00:38:39.490 Normal NVM Subsystem Shutdown event: Not Supported 00:38:39.490 Zone Descriptor Change Notices: Not Supported 00:38:39.490 Discovery Log Change Notices: Not Supported 00:38:39.490 Controller Attributes 00:38:39.490 128-bit Host Identifier: Supported 00:38:39.490 Non-Operational Permissive Mode: Not Supported 00:38:39.490 NVM Sets: Not Supported 00:38:39.490 Read Recovery Levels: Not Supported 00:38:39.490 Endurance Groups: Not Supported 00:38:39.490 Predictable Latency Mode: Not Supported 00:38:39.490 Traffic Based Keep ALive: Supported 00:38:39.490 Namespace Granularity: Not Supported 00:38:39.490 SQ Associations: Not Supported 00:38:39.490 UUID List: Not Supported 00:38:39.490 Multi-Domain Subsystem: Not Supported 00:38:39.490 Fixed Capacity Management: Not Supported 00:38:39.490 Variable Capacity Management: Not Supported 00:38:39.490 Delete Endurance Group: Not Supported 00:38:39.491 Delete NVM Set: Not Supported 00:38:39.491 Extended LBA Formats Supported: Not Supported 00:38:39.491 Flexible Data Placement Supported: Not Supported 00:38:39.491 00:38:39.491 Controller Memory Buffer Support 00:38:39.491 ================================ 00:38:39.491 Supported: No 00:38:39.491 00:38:39.491 Persistent Memory Region Support 00:38:39.491 ================================ 00:38:39.491 Supported: No 00:38:39.491 00:38:39.491 Admin Command Set Attributes 00:38:39.491 ============================ 00:38:39.491 Security Send/Receive: Not Supported 00:38:39.491 Format NVM: Not Supported 00:38:39.491 Firmware Activate/Download: Not Supported 00:38:39.491 Namespace Management: Not Supported 00:38:39.491 Device Self-Test: Not Supported 00:38:39.491 Directives: Not Supported 00:38:39.491 NVMe-MI: Not Supported 00:38:39.491 Virtualization Management: Not Supported 00:38:39.491 Doorbell Buffer Config: Not Supported 00:38:39.491 Get LBA Status Capability: Not Supported 00:38:39.491 Command & Feature Lockdown Capability: Not Supported 00:38:39.491 Abort Command Limit: 4 00:38:39.491 Async Event Request Limit: 4 00:38:39.491 Number of Firmware Slots: N/A 00:38:39.491 Firmware Slot 1 Read-Only: N/A 00:38:39.491 Firmware Activation Without Reset: N/A 00:38:39.491 Multiple Update Detection Support: N/A 00:38:39.491 Firmware Update Granularity: No Information Provided 00:38:39.491 Per-Namespace SMART Log: Yes 00:38:39.491 Asymmetric Namespace Access Log Page: Supported 00:38:39.491 ANA Transition Time : 10 sec 00:38:39.491 00:38:39.491 Asymmetric Namespace Access Capabilities 00:38:39.491 ANA Optimized State : Supported 00:38:39.491 ANA Non-Optimized State : Supported 00:38:39.491 ANA Inaccessible State : Supported 00:38:39.491 ANA Persistent Loss State : Supported 00:38:39.491 ANA Change State : Supported 00:38:39.491 ANAGRPID is not changed : No 00:38:39.491 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:38:39.491 00:38:39.491 ANA Group Identifier Maximum : 128 00:38:39.491 Number of ANA Group Identifiers : 128 00:38:39.491 Max Number of Allowed Namespaces : 1024 00:38:39.491 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:38:39.491 Command Effects Log Page: Supported 00:38:39.491 Get Log Page Extended Data: Supported 00:38:39.491 Telemetry Log Pages: Not Supported 00:38:39.491 Persistent Event Log Pages: Not Supported 00:38:39.491 Supported Log Pages Log Page: May Support 00:38:39.491 Commands Supported & Effects Log Page: Not Supported 00:38:39.491 Feature Identifiers & Effects Log Page: May Support 00:38:39.491 NVMe-MI Commands & Effects Log Page: May Support 00:38:39.491 Data Area 4 for Telemetry Log: Not Supported 00:38:39.491 Error Log Page Entries Supported: 128 00:38:39.491 Keep Alive: Supported 00:38:39.491 Keep Alive Granularity: 1000 ms 00:38:39.491 00:38:39.491 NVM Command Set Attributes 00:38:39.491 ========================== 00:38:39.491 Submission Queue Entry Size 00:38:39.491 Max: 64 00:38:39.491 Min: 64 00:38:39.491 Completion Queue Entry Size 00:38:39.491 Max: 16 00:38:39.491 Min: 16 00:38:39.491 Number of Namespaces: 1024 00:38:39.491 Compare Command: Not Supported 00:38:39.491 Write Uncorrectable Command: Not Supported 00:38:39.491 Dataset Management Command: Supported 00:38:39.491 Write Zeroes Command: Supported 00:38:39.491 Set Features Save Field: Not Supported 00:38:39.491 Reservations: Not Supported 00:38:39.491 Timestamp: Not Supported 00:38:39.491 Copy: Not Supported 00:38:39.491 Volatile Write Cache: Present 00:38:39.491 Atomic Write Unit (Normal): 1 00:38:39.491 Atomic Write Unit (PFail): 1 00:38:39.491 Atomic Compare & Write Unit: 1 00:38:39.491 Fused Compare & Write: Not Supported 00:38:39.491 Scatter-Gather List 00:38:39.491 SGL Command Set: Supported 00:38:39.491 SGL Keyed: Not Supported 00:38:39.491 SGL Bit Bucket Descriptor: Not Supported 00:38:39.491 SGL Metadata Pointer: Not Supported 00:38:39.491 Oversized SGL: Not Supported 00:38:39.491 SGL Metadata Address: Not Supported 00:38:39.491 SGL Offset: Supported 00:38:39.491 Transport SGL Data Block: Not Supported 00:38:39.491 Replay Protected Memory Block: Not Supported 00:38:39.491 00:38:39.491 Firmware Slot Information 00:38:39.491 ========================= 00:38:39.491 Active slot: 0 00:38:39.491 00:38:39.491 Asymmetric Namespace Access 00:38:39.491 =========================== 00:38:39.491 Change Count : 0 00:38:39.491 Number of ANA Group Descriptors : 1 00:38:39.491 ANA Group Descriptor : 0 00:38:39.491 ANA Group ID : 1 00:38:39.491 Number of NSID Values : 1 00:38:39.491 Change Count : 0 00:38:39.491 ANA State : 1 00:38:39.491 Namespace Identifier : 1 00:38:39.491 00:38:39.491 Commands Supported and Effects 00:38:39.491 ============================== 00:38:39.491 Admin Commands 00:38:39.491 -------------- 00:38:39.491 Get Log Page (02h): Supported 00:38:39.491 Identify (06h): Supported 00:38:39.491 Abort (08h): Supported 00:38:39.491 Set Features (09h): Supported 00:38:39.491 Get Features (0Ah): Supported 00:38:39.491 Asynchronous Event Request (0Ch): Supported 00:38:39.491 Keep Alive (18h): Supported 00:38:39.491 I/O Commands 00:38:39.491 ------------ 00:38:39.491 Flush (00h): Supported 00:38:39.491 Write (01h): Supported LBA-Change 00:38:39.491 Read (02h): Supported 00:38:39.491 Write Zeroes (08h): Supported LBA-Change 00:38:39.491 Dataset Management (09h): Supported 00:38:39.491 00:38:39.491 Error Log 00:38:39.491 ========= 00:38:39.491 Entry: 0 00:38:39.491 Error Count: 0x3 00:38:39.491 Submission Queue Id: 0x0 00:38:39.491 Command Id: 0x5 00:38:39.491 Phase Bit: 0 00:38:39.491 Status Code: 0x2 00:38:39.491 Status Code Type: 0x0 00:38:39.491 Do Not Retry: 1 00:38:39.491 Error
Location: 0x28 00:38:39.491 LBA: 0x0 00:38:39.491 Namespace: 0x0 00:38:39.491 Vendor Log Page: 0x0 00:38:39.491 ----------- 00:38:39.491 Entry: 1 00:38:39.491 Error Count: 0x2 00:38:39.491 Submission Queue Id: 0x0 00:38:39.491 Command Id: 0x5 00:38:39.491 Phase Bit: 0 00:38:39.491 Status Code: 0x2 00:38:39.491 Status Code Type: 0x0 00:38:39.491 Do Not Retry: 1 00:38:39.491 Error Location: 0x28 00:38:39.491 LBA: 0x0 00:38:39.491 Namespace: 0x0 00:38:39.491 Vendor Log Page: 0x0 00:38:39.491 ----------- 00:38:39.491 Entry: 2 00:38:39.491 Error Count: 0x1 00:38:39.491 Submission Queue Id: 0x0 00:38:39.491 Command Id: 0x4 00:38:39.491 Phase Bit: 0 00:38:39.491 Status Code: 0x2 00:38:39.491 Status Code Type: 0x0 00:38:39.491 Do Not Retry: 1 00:38:39.491 Error Location: 0x28 00:38:39.491 LBA: 0x0 00:38:39.491 Namespace: 0x0 00:38:39.491 Vendor Log Page: 0x0 00:38:39.491 00:38:39.491 Number of Queues 00:38:39.491 ================ 00:38:39.491 Number of I/O Submission Queues: 128 00:38:39.491 Number of I/O Completion Queues: 128 00:38:39.491 00:38:39.491 ZNS Specific Controller Data 00:38:39.491 ============================ 00:38:39.491 Zone Append Size Limit: 0 00:38:39.491 00:38:39.491 00:38:39.491 Active Namespaces 00:38:39.491 ================= 00:38:39.491 get_feature(0x05) failed 00:38:39.491 Namespace ID:1 00:38:39.491 Command Set Identifier: NVM (00h) 00:38:39.491 Deallocate: Supported 00:38:39.491 Deallocated/Unwritten Error: Not Supported 00:38:39.491 Deallocated Read Value: Unknown 00:38:39.491 Deallocate in Write Zeroes: Not Supported 00:38:39.491 Deallocated Guard Field: 0xFFFF 00:38:39.491 Flush: Supported 00:38:39.491 Reservation: Not Supported 00:38:39.491 Namespace Sharing Capabilities: Multiple Controllers 00:38:39.491 Size (in LBAs): 1310720 (5GiB) 00:38:39.491 Capacity (in LBAs): 1310720 (5GiB) 00:38:39.491 Utilization (in LBAs): 1310720 (5GiB) 00:38:39.491 UUID: 2d22c83c-e7b7-4a46-8d08-aad396230733 00:38:39.491 Thin Provisioning: Not Supported 00:38:39.491 Per-NS Atomic Units: Yes 00:38:39.491 Atomic Boundary Size (Normal): 0 00:38:39.491 Atomic Boundary Size (PFail): 0 00:38:39.491 Atomic Boundary Offset: 0 00:38:39.491 NGUID/EUI64 Never Reused: No 00:38:39.491 ANA group ID: 1 00:38:39.491 Namespace Write Protected: No 00:38:39.491 Number of LBA Formats: 1 00:38:39.491 Current LBA Format: LBA Format #00 00:38:39.491 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:38:39.491 00:38:39.491 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:38:39.491 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:39.491 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:39.751 rmmod nvme_tcp 00:38:39.751 rmmod nvme_fabrics 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:38:39.751 05:49:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:38:39.751 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:40.012 05:49:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:40.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:40.960 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:38:40.960 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:38:40.960 ************************************ 00:38:40.960 END TEST nvmf_identify_kernel_target 00:38:40.960 ************************************ 00:38:40.960 00:38:40.960 real 0m3.715s 00:38:40.960 user 0m1.242s 00:38:40.960 sys 0m1.845s 00:38:40.960 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:40.960 05:50:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:40.960 05:50:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:38:40.960 05:50:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:40.960 05:50:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:40.960 05:50:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:40.960 ************************************ 00:38:40.960 START TEST nvmf_auth_host 00:38:40.960 ************************************ 00:38:40.961 05:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:38:41.220 * Looking for test storage... 
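For reference, the clean_kernel_target sequence a few lines up unwinds the kernel nvmet configfs tree in the reverse order of its creation. Gathered into one sketch (paths taken from the log; the bare "echo 0" is assumed to land in the namespace's enable file, which the log does not show):

nqn=nqn.2016-06.io.spdk:testnqn
echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # disable the namespace first (assumed target of the bare "echo 0")
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                  # unlink port -> subsystem
rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet   # the modules only unload once configfs is empty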
00:38:41.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:38:41.220 05:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:41.221 05:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:38:41.221 05:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.221 --rc genhtml_branch_coverage=1 00:38:41.221 --rc genhtml_function_coverage=1 00:38:41.221 --rc genhtml_legend=1 00:38:41.221 --rc geninfo_all_blocks=1 00:38:41.221 --rc geninfo_unexecuted_blocks=1 00:38:41.221 00:38:41.221 ' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.221 --rc genhtml_branch_coverage=1 00:38:41.221 --rc genhtml_function_coverage=1 00:38:41.221 --rc genhtml_legend=1 00:38:41.221 --rc geninfo_all_blocks=1 00:38:41.221 --rc geninfo_unexecuted_blocks=1 00:38:41.221 00:38:41.221 ' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.221 --rc genhtml_branch_coverage=1 00:38:41.221 --rc genhtml_function_coverage=1 00:38:41.221 --rc genhtml_legend=1 00:38:41.221 --rc geninfo_all_blocks=1 00:38:41.221 --rc geninfo_unexecuted_blocks=1 00:38:41.221 00:38:41.221 ' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:41.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:41.221 --rc genhtml_branch_coverage=1 00:38:41.221 --rc genhtml_function_coverage=1 00:38:41.221 --rc genhtml_legend=1 00:38:41.221 --rc geninfo_all_blocks=1 00:38:41.221 --rc geninfo_unexecuted_blocks=1 00:38:41.221 00:38:41.221 ' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.221 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:41.222 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:38:41.222 Cannot find device "nvmf_init_br" 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:38:41.222 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:38:41.481 Cannot find device "nvmf_init_br2" 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:38:41.481 Cannot find device "nvmf_tgt_br" 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:38:41.481 Cannot find device "nvmf_tgt_br2" 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:38:41.481 Cannot find device "nvmf_init_br" 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:38:41.481 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:38:41.481 Cannot find device "nvmf_init_br2" 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:38:41.482 Cannot find device "nvmf_tgt_br" 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:38:41.482 Cannot find device "nvmf_tgt_br2" 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:38:41.482 Cannot find device "nvmf_br" 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:38:41.482 Cannot find device "nvmf_init_if" 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:38:41.482 Cannot find device "nvmf_init_if2" 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:41.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:41.482 05:50:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:41.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:41.482 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
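Condensed, the nvmf_veth_init plumbing above amounts to four veth pairs joined by one bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. A sketch of the topology plus the minimal commands for one initiator/target pair, using only names and addresses that appear in the log:

# nvmf_init_if  10.0.0.1/24 -- nvmf_init_br  \
# nvmf_init_if2 10.0.0.2/24 -- nvmf_init_br2  +-- nvmf_br (bridge)
# nvmf_tgt_if   10.0.0.3/24 -- nvmf_tgt_br    |   (tgt ends live in netns nvmf_tgt_ns_spdk)
# nvmf_tgt_if2  10.0.0.4/24 -- nvmf_tgt_br2  /
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up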
00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:38:41.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:41.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:38:41.742 00:38:41.742 --- 10.0.0.3 ping statistics --- 00:38:41.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.742 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:38:41.742 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:38:41.742 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:38:41.742 00:38:41.742 --- 10.0.0.4 ping statistics --- 00:38:41.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.742 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:41.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:41.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:38:41.742 00:38:41.742 --- 10.0.0.1 ping statistics --- 00:38:41.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.742 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:38:41.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:41.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:38:41.742 00:38:41.742 --- 10.0.0.2 ping statistics --- 00:38:41.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:41.742 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:38:41.742 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92627 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92627 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 92627 ']' 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
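The waitforlisten step above polls until the freshly started nvmf_tgt (pid 92627) answers on its RPC socket, giving up early if the process dies. The helper's body is not captured in this log; a minimal sketch of the assumed shape, using SPDK's stock rpc.py client:

for (( i = 0; i < 100; i++ )); do
    # listening: a no-op RPC on the UNIX socket succeeds, so stop waiting
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 92627 2> /dev/null || break   # nvmf_tgt already exited; abort the wait
    sleep 0.1
done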
00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:41.743 05:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.122 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:43.122 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:38:43.122 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:43.122 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:43.122 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f731ecf33a4588cf758dae771d9270d3 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qpM 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f731ecf33a4588cf758dae771d9270d3 0 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f731ecf33a4588cf758dae771d9270d3 0 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f731ecf33a4588cf758dae771d9270d3 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qpM 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qpM 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.qpM 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.123 05:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=feaf85faa28b5d94ab5b348ea36e3fd7aa57f7135505667b8bba656ed57032bc 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mRn 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key feaf85faa28b5d94ab5b348ea36e3fd7aa57f7135505667b8bba656ed57032bc 3 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 feaf85faa28b5d94ab5b348ea36e3fd7aa57f7135505667b8bba656ed57032bc 3 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=feaf85faa28b5d94ab5b348ea36e3fd7aa57f7135505667b8bba656ed57032bc 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mRn 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mRn 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.mRn 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7d7133ab6eb885ba5735996de8cfaa1521c2e80a32456847 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YVl 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7d7133ab6eb885ba5735996de8cfaa1521c2e80a32456847 0 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7d7133ab6eb885ba5735996de8cfaa1521c2e80a32456847 0 
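Each gen_dhchap_key round above draws random hex from /dev/urandom with xxd and hands it to format_key, whose inline "python -" body is not shown in the log. Judging from the DHHC-1 prefix and the digest indices (null=0, sha256=1, sha384=2, sha512=3), the output is assumed to be the usual NVMe-oF DH-HMAC-CHAP secret encoding: base64 over the key bytes plus a little-endian CRC32 of those bytes. A hypothetical standalone version in the script's own bash-plus-python idiom:

key=f731ecf33a4588cf758dae771d9270d3   # keys[0] hex value from the log above
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python - "$key" "$digest" <<'PY'
import base64, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # assumed: CRC32 of the key, little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
PY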
00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7d7133ab6eb885ba5735996de8cfaa1521c2e80a32456847 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YVl 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YVl 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YVl 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=03e32835bbc6141cd8f939346ce71f360f1dd5fdf8770bee 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YGk 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 03e32835bbc6141cd8f939346ce71f360f1dd5fdf8770bee 2 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 03e32835bbc6141cd8f939346ce71f360f1dd5fdf8770bee 2 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=03e32835bbc6141cd8f939346ce71f360f1dd5fdf8770bee 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YGk 00:38:43.123 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YGk 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.YGk 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.124 05:50:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e47341ebda19e3e4cb5846eb354af2df 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2QT 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e47341ebda19e3e4cb5846eb354af2df 1 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e47341ebda19e3e4cb5846eb354af2df 1 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e47341ebda19e3e4cb5846eb354af2df 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.124 05:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2QT 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2QT 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2QT 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=790103c492d208f28a1ec445d0c63884 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oUo 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 790103c492d208f28a1ec445d0c63884 1 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 790103c492d208f28a1ec445d0c63884 1 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=790103c492d208f28a1ec445d0c63884 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:38:43.124 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oUo 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oUo 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oUo 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:43.384 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0035de464737ebe3894bfd6cdd260d22541a9c9faeff03e4 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.y2l 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0035de464737ebe3894bfd6cdd260d22541a9c9faeff03e4 2 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0035de464737ebe3894bfd6cdd260d22541a9c9faeff03e4 2 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0035de464737ebe3894bfd6cdd260d22541a9c9faeff03e4 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.y2l 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.y2l 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.y2l 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:38:43.385 05:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2501143d8036acd0da7346ad9347171d 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8o7 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2501143d8036acd0da7346ad9347171d 0 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2501143d8036acd0da7346ad9347171d 0 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2501143d8036acd0da7346ad9347171d 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8o7 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8o7 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.8o7 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=545f1aa81b2722c4262a005b8786aaf50f3078c0eba88a8a6a8a120953303c08 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cKl 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 545f1aa81b2722c4262a005b8786aaf50f3078c0eba88a8a6a8a120953303c08 3 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 545f1aa81b2722c4262a005b8786aaf50f3078c0eba88a8a6a8a120953303c08 3 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=545f1aa81b2722c4262a005b8786aaf50f3078c0eba88a8a6a8a120953303c08 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cKl 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cKl 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.cKl 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92627 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 92627 ']' 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:43.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:43.385 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qpM 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.mRn ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mRn 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YVl 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.YGk ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.YGk 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2QT 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oUo ]] 00:38:43.645 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oUo 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.y2l 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.8o7 ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.8o7 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.cKl 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:43.646 05:50:03 
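Once waitforlisten confirms the target (pid 92627) is up, host/auth.sh@80-82 registers every generated file with the application's keyring: keyN for host secrets, ckeyN for the matching controller secrets, with ckey4 skipped because none was generated. rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so outside the harness the same registration is simply:

# standalone equivalent of the rpc_cmd keyring_file_add_key calls above;
# the names and file paths are the ones generated earlier in this run
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha256.2QT
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oUo
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key3  /tmp/spdk.key-sha384.y2l
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey3 /tmp/spdk.key-null.8o7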
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:43.646 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:43.904 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:43.904 05:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:44.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:44.162 Waiting for block devices as requested 00:38:44.162 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:44.421 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:38:44.989 No valid GPT data, bailing 00:38:44.989 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:38:45.248 No valid GPT data, bailing 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
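The "No valid GPT data, bailing" lines here are the desired outcome, not failures: configure_kernel_target loads nvmet, hands the NVMe devices back to the kernel via setup.sh reset, then walks /sys/block/nvme* for a namespace that is neither zoned nor already carrying a partition table before exporting it. A simplified sketch of the probe visible above (block_in_use in scripts/common.sh also consults spdk-gpt.py, omitted here):

nvme=""
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ -e $block ]] || continue
    # is_block_zoned: only non-zoned namespaces qualify
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # an empty PTTYPE means no partition table, i.e. the disk is free to export
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
done
# this run settles on /dev/nvme1n1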
scripts/common.sh@395 -- # return 1 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:45.248 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:38:45.249 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:38:45.249 05:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:38:45.249 No valid GPT data, bailing 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:38:45.249 No valid GPT data, bailing 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:38:45.249 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -a 10.0.0.1 -t tcp -s 4420 00:38:45.508 00:38:45.508 Discovery Log Number of Records 2, Generation counter 2 00:38:45.508 =====Discovery Log Entry 0====== 00:38:45.508 trtype: tcp 00:38:45.508 adrfam: ipv4 00:38:45.508 subtype: current discovery subsystem 00:38:45.508 treq: not specified, sq flow control disable supported 00:38:45.508 portid: 1 00:38:45.508 trsvcid: 4420 00:38:45.508 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:45.508 traddr: 10.0.0.1 00:38:45.508 eflags: none 00:38:45.508 sectype: none 00:38:45.508 =====Discovery Log Entry 1====== 00:38:45.508 trtype: tcp 00:38:45.508 adrfam: ipv4 00:38:45.508 subtype: nvme subsystem 00:38:45.508 treq: not specified, sq flow control disable supported 00:38:45.508 portid: 1 00:38:45.508 trsvcid: 4420 00:38:45.508 subnqn: nqn.2024-02.io.spdk:cnode0 00:38:45.508 traddr: 10.0.0.1 00:38:45.508 eflags: none 00:38:45.508 sectype: none 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:45.508 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.509 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.768 nvme0n1 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.768 nvme0n1 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.768 
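The records above, between the device probe and the first connect, build the kernel target and arm per-host DH-HMAC-CHAP: configure_kernel_target creates the subsystem/namespace/port tree in configfs and links them (the nvme discover output with its two discovery-log entries confirms the export), host/auth.sh@36-38 creates the host NQN and links it into allowed_hosts, and nvmet_auth_set_key writes the hash, DH group, and both DHHC-1 secrets. In configfs terms this amounts to roughly the following (attribute paths assumed from the kernel nvmet layout; model/allow-any-host writes and the full secrets are abbreviated):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1" "$host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
ln -s "$host" "$subsys/allowed_hosts/"

# nvmet_auth_set_key sha256 ffdhe2048 1, as seen at host/auth.sh@48-51:
echo 'hmac(sha256)'      > "$host/dhchap_hash"
echo ffdhe2048           > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:<key1>'  > "$host/dhchap_key"       # host secret
echo 'DHHC-1:02:<ckey1>' > "$host/dhchap_ctrl_key"  # controller secret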
05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.768 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:46.028 05:50:05 
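On the initiator side, each connect_authenticate round (the host/auth.sh@55-61 records repeating above) first restricts the bdev layer to the digest and DH group under test and then attaches using the keyring names registered earlier; the controller only comes up if the DH-HMAC-CHAP exchange succeeds. As standalone RPCs, the keyid=1 round shown here is:

scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1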
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.028 nvme0n1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:46.028 05:50:05 
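The remainder of the section is this same round repeated mechanically; the host/auth.sh@100-104 loop markers give the shape of the sweep, which covers every digest, DH group, and key the setup generated (the digest and dhgroup lists come from the @93-94 printf records earlier):

for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach/verify/detach
        done
    done
done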
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.028 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.288 nvme0n1 00:38:46.288 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.288 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.288 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.288 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.288 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.288 05:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.288 05:50:06 
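Between every attach and the next key change sits the same verification, visible as the @64-65 records: list the controllers, require exactly the expected name, then tear the controller down (the interleaved "nvme0n1" lines are the target namespace surfacing while the authenticated controller is up). Condensed:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                       # authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0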
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.288 nvme0n1 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:46.288 
05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.288 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
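keyid 4 is the deliberate edge case: ckeys[4] was left empty, so the @51 guard ([[ -z '' ]]) skips writing a controller secret on the target and the @58 array idiom expands to nothing on the initiator, exercising unidirectional authentication (the host proves itself, the controller does not):

# host/auth.sh@58: add the --dhchap-ctrlr-key pair only when a ckey exists
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"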
00:38:46.547 nvme0n1 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:46.547 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:46.806 05:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.806 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.065 nvme0n1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.065 05:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:47.065 05:50:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.065 nvme0n1 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.065 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.325 05:50:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.325 nvme0n1 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.325 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.584 nvme0n1 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:47.584 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.585 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.844 nvme0n1 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:47.844 05:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.411 05:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.411 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.670 nvme0n1 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:48.670 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:48.671 05:50:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.671 nvme0n1 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.671 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.930 nvme0n1 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:48.930 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.190 05:50:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.190 nvme0n1 00:38:49.190 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.190 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:49.190 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.190 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:49.190 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.190 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:49.448 05:50:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.448 nvme0n1 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.448 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:49.707 05:50:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.082 05:50:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.342 nvme0n1 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.342 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.601 nvme0n1 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:51.601 05:50:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.601 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.860 05:50:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.860 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.119 nvme0n1 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:52.119 05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.119 
05:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.379 nvme0n1 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.379 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.639 nvme0n1 00:38:52.639 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.639 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:52.639 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:52.639 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.639 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:52.898 05:50:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.898 05:50:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.467 nvme0n1 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.467 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.035 nvme0n1 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:54.035 
05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.035 05:50:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.602 nvme0n1 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:54.602 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.603 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.170 nvme0n1 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.170 05:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:55.170 05:50:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.170 05:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.738 nvme0n1 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.738 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:55.739 nvme0n1 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.739 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.998 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.998 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.998 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.998 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.999 nvme0n1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:38:55.999 
05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.999 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.259 nvme0n1 00:38:56.259 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.259 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:56.259 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.259 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:56.259 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.259 05:50:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.259 
05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.259 nvme0n1 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.259 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.518 nvme0n1 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.518 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.778 nvme0n1 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.778 
05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:56.778 05:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.778 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.037 nvme0n1 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:57.037 05:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.037 nvme0n1 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:57.037 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.306 05:50:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:57.306 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:57.307 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.307 05:50:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.307 nvme0n1 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:38:57.307 
05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.307 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
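Each pass through the trace above is the same four-step cycle: program the target's expected DH-HMAC-CHAP parameters, restrict the initiator's allowed digest and DH group, attach with the matching key, then verify and detach. Below is a minimal sketch of one such iteration (the sha384/ffdhe3072/keyid=3 case logged above), assuming SPDK's scripts/rpc.py is on PATH; the nvmet configfs path and attribute names are assumptions standing in for whatever nvmet_auth_set_key writes on this kernel, and key3/ckey3 are key names registered earlier in the test, not raw DHHC-1 secrets.

    # Target side: set the expected digest, DH group and key(s) for the host.
    # (configfs path and attribute names are assumptions, not verbatim from auth.sh)
    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_cfs/dhchap_hash"
    echo 'ffdhe3072'    > "$host_cfs/dhchap_dhgroup"
    echo "${keys[3]}"   > "$host_cfs/dhchap_key"       # the DHHC-1:02:MDAz... value above
    echo "${ckeys[3]}"  > "$host_cfs/dhchap_ctrl_key"  # bidirectional auth; skipped when empty

    # Initiator side (the SPDK host under test), exactly as run via rpc_cmd above:
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Verify the authenticated controller actually came up, then detach so the
    # next (dhgroup, keyid) combination starts from a clean state:
    [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0

The get_main_ns_ip block repeated before every attach merely picks the address for the transport in use; with tcp and NVMF_INITIATOR_IP populated it resolves to the 10.0.0.1 seen in each attach call.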
00:38:57.597 nvme0n1 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:57.597 05:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.597 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.857 nvme0n1 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:57.857 05:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:57.857 05:50:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.857 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.117 nvme0n1 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.117 05:50:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.376 nvme0n1 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:58.376 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.377 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.636 nvme0n1 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
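The recurring get_main_ns_ip fragment above picks the address the initiator dials: it maps the transport to the name of an environment variable and then dereferences that name. A sketch assembled from the expanded xtrace values; the variable holding the transport (written $TEST_TRANSPORT below) is an assumption, since xtrace only shows its value, tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # resolve transport -> variable name, then indirect-expand it
        ip=${ip_candidates["$TEST_TRANSPORT"]}   # assumption: transport variable name
        [[ -z $ip ]] && return 1                 # the trace only shows the non-empty path
        echo "${!ip}"                            # NVMF_INITIATOR_IP -> 10.0.0.1 in this run
    }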
DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.636 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.895 nvme0n1 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
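Slot 4 above is the one case attached without --dhchap-ctrlr-key: its controller key is empty (ckey= and [[ -z '' ]] in the trace), and host/auth.sh@58 builds the flag as an array that expands to nothing when the slot has no ckey. The idiom, with the transport/addressing flags elided as they appear in the attach calls above:

    # Emit the ctrlr-key flag pair only when a controller key exists for this slot
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"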
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.895 05:50:18 
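Every iteration above verifies the attach the same way before tearing down: list the bdev_nvme controllers over RPC, check that the lone name is nvme0, then detach (the \n\v\m\e\0 in the trace is just bash quoting each character of the match pattern). Distilled:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                       # authentication produced a controller
    rpc_cmd bdev_nvme_detach_controller nvme0  # clean slate for the next key slot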
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:58.895 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.896 05:50:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.154 nvme0n1 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.154 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:38:59.411 05:50:19 
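All secrets in this trace share one shape, DHHC-1:<id>:<base64>:, with the two-digit id varying by slot (00 through 03 above). Per the DH-HMAC-CHAP secret representation, that field selects the hash wrapping of the secret, 00 meaning unhashed; that reading is background knowledge, not something the trace itself states. A shape check matching the strings seen here:

    # Matches e.g. DHHC-1:01:ZTQ3...ZGY/0gWF: (abbreviated from the trace above)
    [[ $key =~ ^DHHC-1:0[0-3]:[A-Za-z0-9+/]+={0,2}:$ ]]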
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.411 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.669 nvme0n1 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.669 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.928 nvme0n1 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.928 05:50:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.495 nvme0n1 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:00.495 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:00.496 05:50:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.496 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.755 nvme0n1 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.755 05:50:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.323 nvme0n1 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.323 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.892 nvme0n1 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:01.892 05:50:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:01.892 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:01.893 05:50:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:01.893 05:50:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.460 nvme0n1 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:02.460 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:02.461 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.461 
05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 nvme0n1 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.029 05:50:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.598 nvme0n1 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:39:03.598 05:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:03.598 05:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.598 nvme0n1 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.598 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:03.858 05:50:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:03.858 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.859 nvme0n1 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
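[Editor's note — get_main_ns_ip (nvmf/common.sh@769-783) is traced in full before every attach: it maps the transport to the name of the variable holding the target IP and resolves it, printing 10.0.0.1 here. A sketch reconstructed from the trace; the name of the transport variable (TEST_TRANSPORT) is an assumption, as the trace only ever shows its expanded value, tcp:]

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # fail if the transport is unset or has no candidate variable (@775)
        [[ -z ${TEST_TRANSPORT} ]] && return 1
        [[ -z ${ip_candidates[${TEST_TRANSPORT}]} ]] && return 1
        ip=${ip_candidates[${TEST_TRANSPORT}]}
        ip=${!ip}    # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1 here
        [[ -z ${ip} ]] && return 1
        echo "${ip}"
    }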
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.859 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.118 nvme0n1 00:39:04.118 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.118 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.118 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.118 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.119 nvme0n1 00:39:04.119 05:50:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.119 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.119 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.119 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.119 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.119 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:04.378 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.379 nvme0n1 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.379 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:04.638 nvme0n1 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.638 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.638 nvme0n1 00:39:04.639 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.639 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.639 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.639 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.639 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:39:04.898 
05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.898 nvme0n1 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:39:04.898 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.898 
05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.899 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.158 nvme0n1 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.158 05:50:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.417 nvme0n1 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:39:05.417 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.418 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.677 nvme0n1 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.677 
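
[annotation] The trace around this point is one pass through connect_authenticate (host/auth.sh@104 calling into @55-@65), the host-side half of every iteration in this test: pin the initiator to a single digest/dhgroup via bdev_nvme_set_options, resolve the target address with get_main_ns_ip, attach with the keyid-th DHHC-1 key, confirm the controller came up, and detach. The following is a reconstruction of that body from the xtrace, not the verbatim helper; rpc_cmd, get_main_ns_ip and the keys/ckeys arrays come from the surrounding harness, and the tcp transport, 10.0.0.1:4420 endpoint and NQNs are simply what this run uses:

    # Sketch of connect_authenticate as traced above (host/auth.sh@55-65).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # @58: expands to zero words when ckeys[keyid] is empty, so the
        # controller-key flag is only passed for bidirectional auth.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # @60: restrict the initiator to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # @61: connect using the keyid-th key; the target side was primed
        # with the same material by nvmet_auth_set_key.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # @64/@65: authentication succeeded iff the controller exists.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

get_main_ns_ip (nvmf/common.sh@769-783), traced between @60 and @61, is a small candidate lookup: an associative array maps each transport to the *name* of the environment variable holding the address ("rdma" -> NVMF_FIRST_TARGET_IP, "tcp" -> NVMF_INITIATOR_IP), and indirect expansion of the entry matching the transport under test yields 10.0.0.1 for this tcp run. [end annotation]
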
05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:05.677 05:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:05.677 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:05.678 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:05.678 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:05.678 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:05.678 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:05.678 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.678 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.678 nvme0n1 00:39:05.678 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:05.936 05:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.936 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.937 nvme0n1 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.937 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.196 05:50:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.196 05:50:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.196 nvme0n1 00:39:06.196 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.196 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.196 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.196 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.196 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.196 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:06.455 
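
[annotation] The lines here begin the next target-side step: the nested loops at host/auth.sh@101-@103 walk every dhgroup and every keyid and call nvmet_auth_set_key before each connect attempt. Its traced body (@42-@51) assigns digest/dhgroup/keyid, then issues bare echo commands for 'hmac(sha512)', the dhgroup name and the DHHC-1 key(s); bash xtrace does not print redirections, but those echoes are evidently feeding the kernel nvmet host entry. A sketch of that shape follows; the configfs attribute paths and the $nvmet_host directory are assumptions based on the standard nvmet configfs layout, not text taken from this log:

    # Sketch of nvmet_auth_set_key (host/auth.sh@42-51); attribute paths
    # and $nvmet_host are assumed, not copied from the script.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}

        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # @48
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"      # @49
        echo "$key" > "$nvmet_host/dhchap_key"              # @50
        # @51: a controller key exists only for bidirectional auth;
        # keyid 4 has ckey='' in this run, so nothing is written there.
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }

With the target primed this way and the initiator restricted to the same digest/dhgroup, each successful bdev_nvme_attach_controller in the trace demonstrates a completed DH-HMAC-CHAP handshake for that (dhgroup, keyid) combination. [end annotation]
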
05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:39:06.455 nvme0n1 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.455 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.456 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF: 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]] 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:06.715 05:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.715 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.716 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.976 nvme0n1 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.976 05:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==: 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:06.976 05:50:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.976 05:50:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.235 nvme0n1 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF: 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.235 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.494 nvme0n1 00:39:07.494 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.494 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:07.494 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:07.494 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.494 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.494 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==: 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]] 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.753 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.754 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.013 nvme0n1 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=: 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=:
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:08.013 05:50:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:08.272 nvme0n1
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:08.272 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF:
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=:
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjczMWVjZjMzYTQ1ODhjZjc1OGRhZTc3MWQ5MjcwZDM4QORF:
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=: ]]
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhZjg1ZmFhMjhiNWQ5NGFiNWIzNDhlYTM2ZTNmZDdhYTU3ZjcxMzU1MDU2NjdiOGJiYTY1NmVkNTcwMzJiY0TsSlo=:
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:08.531 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:08.790 nvme0n1
00:39:08.790 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:08.790 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:08.790 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:08.790 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:08.790 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:08.790 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==:
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==:
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==:
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==:
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:09.050 05:50:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:09.618 nvme0n1
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:39:09.618 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF:
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG:
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF:
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]]
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG:
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:09.619 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:09.878 nvme0n1
00:39:09.878 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==:
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl:
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDAzNWRlNDY0NzM3ZWJlMzg5NGJmZDZjZGQyNjBkMjI1NDFhOWM5ZmFlZmYwM2U06+V5ww==:
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl: ]]
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjUwMTE0M2Q4MDM2YWNkMGRhNzM0NmFkOTM0NzE3MWT7tckl:
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:10.138 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.139 05:50:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:10.707 nvme0n1
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=:
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTQ1ZjFhYTgxYjI3MjJjNDI2MmEwMDViODc4NmFhZjUwZjMwNzhjMGViYTg4YThhNmE4YTEyMDk1MzMwM2MwOOJUz3M=:
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:10.707 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:10.708 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:10.708 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:10.708 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:10.708 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:39:10.708 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:10.708 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.277 nvme0n1
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==:
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==:
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==:
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==:
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.277 05:50:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.277 2024/11/20 05:50:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:39:11.277 request:
00:39:11.277 {
00:39:11.277 "method": "bdev_nvme_attach_controller",
00:39:11.277 "params": {
00:39:11.277 "name": "nvme0",
00:39:11.277 "trtype": "tcp",
00:39:11.277 "traddr": "10.0.0.1",
00:39:11.277 "adrfam": "ipv4",
00:39:11.277 "trsvcid": "4420",
00:39:11.277 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:39:11.277 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:39:11.277 "prchk_reftag": false,
00:39:11.277 "prchk_guard": false,
00:39:11.277 "hdgst": false,
00:39:11.277 "ddgst": false,
00:39:11.277 "allow_unrecognized_csi": false
00:39:11.277 }
00:39:11.277 }
00:39:11.277 Got JSON-RPC error response
00:39:11.277 GoRPCClient: error on JSON-RPC call
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:39:11.277 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.278 2024/11/20 05:50:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:39:11.278 request:
00:39:11.278 {
00:39:11.278 "method": "bdev_nvme_attach_controller",
00:39:11.278 "params": {
00:39:11.278 "name": "nvme0",
00:39:11.278 "trtype": "tcp",
00:39:11.278 "traddr": "10.0.0.1",
00:39:11.278 "adrfam": "ipv4",
00:39:11.278 "trsvcid": "4420",
00:39:11.278 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:39:11.278 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:39:11.278 "prchk_reftag": false,
00:39:11.278 "prchk_guard": false,
00:39:11.278 "hdgst": false,
00:39:11.278 "ddgst": false,
00:39:11.278 "dhchap_key": "key2",
00:39:11.278 "allow_unrecognized_csi": false
00:39:11.278 }
00:39:11.278 }
00:39:11.278 Got JSON-RPC error response
00:39:11.278 GoRPCClient: error on JSON-RPC call
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.278 2024/11/20 05:50:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:39:11.278 request:
00:39:11.278 {
00:39:11.278 "method": "bdev_nvme_attach_controller",
00:39:11.278 "params": {
00:39:11.278 "name": "nvme0",
00:39:11.278 "trtype": "tcp",
00:39:11.278 "traddr": "10.0.0.1",
00:39:11.278 "adrfam": "ipv4",
00:39:11.278 "trsvcid": "4420",
00:39:11.278 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:39:11.278 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:39:11.278 "prchk_reftag": false,
00:39:11.278 "prchk_guard": false,
00:39:11.278 "hdgst": false,
00:39:11.278 "ddgst": false,
00:39:11.278 "dhchap_key": "key1",
00:39:11.278 "dhchap_ctrlr_key": "ckey2",
00:39:11.278 "allow_unrecognized_csi": false
00:39:11.278 }
00:39:11.278 }
00:39:11.278 Got JSON-RPC error response
00:39:11.278 GoRPCClient: error on JSON-RPC call
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:11.278 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.538 nvme0n1
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF:
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG:
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF:
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG:
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.538 2024/11/20 05:50:31 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-5 Msg=Input/output error
00:39:11.538 request:
00:39:11.538 {
00:39:11.538 "method": "bdev_nvme_set_keys",
00:39:11.538 "params": {
00:39:11.538 "name": "nvme0",
00:39:11.538 "dhchap_key": "key1",
00:39:11.538 "dhchap_ctrlr_key": "ckey2"
00:39:11.538 }
00:39:11.538 }
00:39:11.538 Got JSON-RPC error response
00:39:11.538 GoRPCClient: error on JSON-RPC call
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:39:11.538 05:50:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2Q3MTMzYWI2ZWI4ODViYTU3MzU5OTZkZThjZmFhMTUyMWMyZTgwYTMyNDU2ODQ3zougFw==:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==: ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDNlMzI4MzViYmM2MTQxY2Q4ZjkzOTM0NmNlNzFmMzYwZjFkZDVmZGY4NzcwYmVlN17qWw==:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:12.917 nvme0n1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTQ3MzQxZWJkYTE5ZTNlNGNiNTg0NmViMzU0YWYyZGY/0gWF:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG: ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzkwMTAzYzQ5MmQyMDhmMjhhMWVjNDQ1ZDBjNjM4ODTfNFyG:
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:12.917 2024/11/20 05:50:32 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:39:12.917 request:
00:39:12.917 {
00:39:12.917 "method": "bdev_nvme_set_keys",
00:39:12.917 "params": {
00:39:12.917 "name": "nvme0",
00:39:12.917 "dhchap_key": "key2",
00:39:12.917 "dhchap_ctrlr_key": "ckey1"
00:39:12.917 }
00:39:12.917 }
00:39:12.917 Got JSON-RPC error response
00:39:12.917 GoRPCClient: error on JSON-RPC call
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:39:12.917 05:50:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:13.855 rmmod nvme_tcp
00:39:13.855 rmmod nvme_fabrics
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92627 ']'
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92627
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 92627 ']'
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 92627
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:39:13.855 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92627
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:39:14.114 killing process with pid 92627
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92627'
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 92627
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 92627
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save
00:39:14.114 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:39:14.115 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore
00:39:14.115 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:14.115 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:39:14.115 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:39:14.115 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:39:14.115 05:50:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:39:14.115 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:39:14.374 05:50:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:39:15.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:39:15.312 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:39:15.312 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:39:15.571 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.qpM /tmp/spdk.key-null.YVl /tmp/spdk.key-sha256.2QT /tmp/spdk.key-sha384.y2l /tmp/spdk.key-sha512.cKl /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log
00:39:15.571 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:39:15.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:39:15.830 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:39:15.830 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:39:16.089 ************************************
00:39:16.089 END TEST nvmf_auth_host
00:39:16.089 ************************************
00:39:16.089
00:39:16.089 real 0m34.902s
00:39:16.089 user 0m32.078s
00:39:16.089 sys 0m4.727s
00:39:16.089 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable
00:39:16.089 05:50:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:16.089 05:50:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:39:16.089 05:50:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp
00:39:16.089 05:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:39:16.089 05:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:39:16.089 05:50:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:39:16.089 ************************************
00:39:16.089 START TEST nvmf_digest
************************************ 00:39:16.089 05:50:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:39:16.089 * Looking for test storage... 00:39:16.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:16.089 05:50:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:16.089 05:50:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:16.089 05:50:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:16.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.350 --rc genhtml_branch_coverage=1 00:39:16.350 --rc genhtml_function_coverage=1 00:39:16.350 --rc genhtml_legend=1 00:39:16.350 --rc geninfo_all_blocks=1 00:39:16.350 --rc geninfo_unexecuted_blocks=1 00:39:16.350 00:39:16.350 ' 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:16.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.350 --rc genhtml_branch_coverage=1 00:39:16.350 --rc genhtml_function_coverage=1 00:39:16.350 --rc genhtml_legend=1 00:39:16.350 --rc geninfo_all_blocks=1 00:39:16.350 --rc geninfo_unexecuted_blocks=1 00:39:16.350 00:39:16.350 ' 00:39:16.350 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:16.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.350 --rc genhtml_branch_coverage=1 00:39:16.350 --rc genhtml_function_coverage=1 00:39:16.350 --rc genhtml_legend=1 00:39:16.350 --rc geninfo_all_blocks=1 00:39:16.351 --rc geninfo_unexecuted_blocks=1 00:39:16.351 00:39:16.351 ' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:16.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.351 --rc genhtml_branch_coverage=1 00:39:16.351 --rc genhtml_function_coverage=1 00:39:16.351 --rc genhtml_legend=1 00:39:16.351 --rc geninfo_all_blocks=1 00:39:16.351 --rc geninfo_unexecuted_blocks=1 00:39:16.351 00:39:16.351 ' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.351 05:50:36 
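The cmp_versions trace above is scripts/common.sh gating lcov options: lt 1.15 2 splits both version strings on dots and dashes and compares them field by field, and because the installed lcov is older than 2 the branch/function coverage flags are kept. A condensed illustration of that comparison, assuming the same field-wise semantics (not the verbatim scripts/common.sh source):

  # Field-wise "less than" for dotted versions, so "1.15" sorts below "2".
  lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"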
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:16.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:16.351 Cannot find device "nvmf_init_br" 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:16.351 Cannot find device "nvmf_init_br2" 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:16.351 Cannot find device "nvmf_tgt_br" 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:39:16.351 Cannot find device "nvmf_tgt_br2" 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:16.351 Cannot find device "nvmf_init_br" 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:16.351 Cannot find device "nvmf_init_br2" 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:16.351 Cannot find device "nvmf_tgt_br" 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:39:16.351 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:16.351 Cannot find device "nvmf_tgt_br2" 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:16.352 Cannot find device "nvmf_br" 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:16.352 Cannot find device "nvmf_init_if" 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:16.352 Cannot find device "nvmf_init_if2" 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:16.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:16.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:16.352 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:16.612 05:50:36 
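The "Cannot find device" messages above are expected on a clean host: nvmf_veth_init tears down any leftovers from a previous run before creating fresh interfaces, and each delete is paired with a true so a missing link is not treated as an error. A minimal sketch of that idempotent teardown pattern, with interface names taken from this log (an illustration, not the verbatim nvmf/common.sh):

  # Best-effort teardown: every step tolerates "Cannot find device" so the
  # subsequent setup always starts from a known-empty state.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true   # detach from bridge if present
      ip link set "$dev" down     2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if  2>/dev/null || true
  ip link delete nvmf_init_if2 2>/dev/null || true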
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:16.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:39:16.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:39:16.612 00:39:16.612 --- 10.0.0.3 ping statistics --- 00:39:16.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.612 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:16.612 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:16.612 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:39:16.612 00:39:16.612 --- 10.0.0.4 ping statistics --- 00:39:16.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.612 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:16.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:16.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:39:16.612 00:39:16.612 --- 10.0.0.1 ping statistics --- 00:39:16.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.612 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:16.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:16.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:39:16.612 00:39:16.612 --- 10.0.0.2 ping statistics --- 00:39:16.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:16.612 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:16.612 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:16.872 ************************************ 00:39:16.872 START TEST nvmf_digest_clean 00:39:16.872 ************************************ 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
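The four pings above validate the dual-path topology before any NVMe/TCP traffic flows: 10.0.0.1 and 10.0.0.2 sit on the initiator side in the root namespace, 10.0.0.3 and 10.0.0.4 on the target side inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge. A small connectivity check mirroring those pings (addresses and namespace name from the log; the script itself is illustrative):

  # One probe per direction and per interface pair; -W 1 bounds each wait to 1s.
  ns=nvmf_tgt_ns_spdk
  ping -c 1 -W 1 10.0.0.3 >/dev/null && echo "root ns -> target if  OK"
  ping -c 1 -W 1 10.0.0.4 >/dev/null && echo "root ns -> target if2 OK"
  ip netns exec "$ns" ping -c 1 -W 1 10.0.0.1 >/dev/null && echo "target ns -> initiator if  OK"
  ip netns exec "$ns" ping -c 1 -W 1 10.0.0.2 >/dev/null && echo "target ns -> initiator if2 OK"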
00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94282 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94282 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94282 ']' 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:16.872 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:16.872 [2024-11-20 05:50:36.605616] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:16.872 [2024-11-20 05:50:36.605741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.872 [2024-11-20 05:50:36.759189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.143 [2024-11-20 05:50:36.802681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:17.143 [2024-11-20 05:50:36.802720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:17.143 [2024-11-20 05:50:36.802729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:17.143 [2024-11-20 05:50:36.802737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:17.143 [2024-11-20 05:50:36.802759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
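nvmfappstart above launches nvmf_tgt inside the target namespace with --wait-for-rpc, so the app initializes EAL, brings up the RPC server on /var/tmp/spdk.sock, and then parks until framework_start_init arrives; waitforlisten is the poll for that socket. A schematic of the start-and-wait handshake, assuming the repo layout from this log (the real waitforlisten also retries and checks the PID; error handling is elided here):

  spdk=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the RPC socket until the target answers; rpc.py exits non-zero until then.
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init   # release --wait-for-rpc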
00:39:17.143 [2024-11-20 05:50:36.803020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:17.143 05:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:17.143 null0 00:39:17.143 [2024-11-20 05:50:36.994246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.143 [2024-11-20 05:50:37.018351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:39:17.143 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94319 00:39:17.144 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94319 /var/tmp/bperf.sock 00:39:17.144 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94319 ']' 00:39:17.144 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:17.144 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:39:17.144 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:17.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:17.144 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:17.144 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:17.440 [2024-11-20 05:50:37.083570] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:17.440 [2024-11-20 05:50:37.083831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94319 ] 00:39:17.440 [2024-11-20 05:50:37.243749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.440 [2024-11-20 05:50:37.297936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.440 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:17.440 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:39:17.440 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:17.440 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:17.440 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:18.051 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:18.051 05:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:18.310 nvme0n1 00:39:18.310 05:50:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:18.310 05:50:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:18.310 Running I/O for 2 seconds... 
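Everything in the run that follows is driven over /var/tmp/bperf.sock: bdevperf starts idle with -z and --wait-for-rpc, the test attaches the NVMe-oF namespace as bdev nvme0n1 with data digest enabled (--ddgst, this being the digest test), and bdevperf.py perform_tests then triggers the preconfigured 2-second randread job. A condensed sketch of that control flow, with paths and arguments taken from the log (sequencing simplified, error handling elided):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  "$spdk/scripts/rpc.py" -s "$sock" framework_start_init
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Runs the job preconfigured on the bdevperf command line (-w/-o/-q/-t).
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests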
00:39:20.182 24541.00 IOPS, 95.86 MiB/s [2024-11-20T05:50:40.101Z] 24639.00 IOPS, 96.25 MiB/s 00:39:20.182 Latency(us) 00:39:20.182 [2024-11-20T05:50:40.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:20.182 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:20.182 nvme0n1 : 2.00 24660.66 96.33 0.00 0.00 5185.42 2777.48 10236.10 00:39:20.182 [2024-11-20T05:50:40.101Z] =================================================================================================================== 00:39:20.182 [2024-11-20T05:50:40.101Z] Total : 24660.66 96.33 0.00 0.00 5185.42 2777.48 10236.10 00:39:20.182 { 00:39:20.182 "results": [ 00:39:20.182 { 00:39:20.182 "job": "nvme0n1", 00:39:20.182 "core_mask": "0x2", 00:39:20.182 "workload": "randread", 00:39:20.182 "status": "finished", 00:39:20.182 "queue_depth": 128, 00:39:20.182 "io_size": 4096, 00:39:20.182 "runtime": 2.003434, 00:39:20.182 "iops": 24660.657650813555, 00:39:20.182 "mibps": 96.33069394849045, 00:39:20.182 "io_failed": 0, 00:39:20.182 "io_timeout": 0, 00:39:20.182 "avg_latency_us": 5185.421212229862, 00:39:20.182 "min_latency_us": 2777.478095238095, 00:39:20.182 "max_latency_us": 10236.099047619047 00:39:20.182 } 00:39:20.182 ], 00:39:20.182 "core_count": 1 00:39:20.182 } 00:39:20.441 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:20.441 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:20.441 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:20.441 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:20.441 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:20.441 | select(.opcode=="crc32c") 00:39:20.441 | "\(.module_name) \(.executed)"' 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94319 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94319 ']' 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94319 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94319 00:39:20.700 killing process with pid 94319 00:39:20.700 Received shutdown signal, test time was about 2.000000 seconds 00:39:20.700 00:39:20.700 Latency(us) 00:39:20.700 [2024-11-20T05:50:40.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:39:20.700 [2024-11-20T05:50:40.619Z] =================================================================================================================== 00:39:20.700 [2024-11-20T05:50:40.619Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94319' 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94319 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94319 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94390 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94390 /var/tmp/bperf.sock 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94390 ']' 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:20.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:20.700 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:20.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:20.959 Zero copy mechanism will not be used. 00:39:20.959 [2024-11-20 05:50:40.619034] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:39:20.959 [2024-11-20 05:50:40.619101] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94390 ] 00:39:20.959 [2024-11-20 05:50:40.757244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.959 [2024-11-20 05:50:40.800291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.959 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:20.959 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:39:20.959 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:20.959 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:20.959 05:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:21.527 05:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:21.527 05:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:21.786 nvme0n1 00:39:21.786 05:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:21.786 05:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:21.786 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:21.786 Zero copy mechanism will not be used. 00:39:21.786 Running I/O for 2 seconds... 
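The MiB/s column in these bdevperf tables is plain arithmetic, IOPS times I/O size divided by 2^20, which makes the results easy to sanity-check: the 4 KiB pass above and the 128 KiB pass below both line up. A two-line check of the figures reported in this log:

  # Throughput sanity check: MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 24660.66 * 4096 / 1048576 }'    # 4 KiB randread pass: ~96.33
  awk 'BEGIN { printf "%.2f MiB/s\n", 9987.17 * 131072 / 1048576 }'   # 128 KiB pass below: ~1248.40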
00:39:24.104 10011.00 IOPS, 1251.38 MiB/s [2024-11-20T05:50:44.023Z] 9989.00 IOPS, 1248.62 MiB/s 00:39:24.104 Latency(us) 00:39:24.104 [2024-11-20T05:50:44.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.104 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:39:24.104 nvme0n1 : 2.00 9987.17 1248.40 0.00 0.00 1599.42 473.97 9237.46 00:39:24.104 [2024-11-20T05:50:44.023Z] =================================================================================================================== 00:39:24.104 [2024-11-20T05:50:44.023Z] Total : 9987.17 1248.40 0.00 0.00 1599.42 473.97 9237.46 00:39:24.104 { 00:39:24.104 "results": [ 00:39:24.104 { 00:39:24.104 "job": "nvme0n1", 00:39:24.104 "core_mask": "0x2", 00:39:24.104 "workload": "randread", 00:39:24.104 "status": "finished", 00:39:24.104 "queue_depth": 16, 00:39:24.104 "io_size": 131072, 00:39:24.104 "runtime": 2.001968, 00:39:24.104 "iops": 9987.172622139815, 00:39:24.104 "mibps": 1248.3965777674769, 00:39:24.104 "io_failed": 0, 00:39:24.104 "io_timeout": 0, 00:39:24.104 "avg_latency_us": 1599.4165228616205, 00:39:24.104 "min_latency_us": 473.9657142857143, 00:39:24.104 "max_latency_us": 9237.455238095237 00:39:24.104 } 00:39:24.104 ], 00:39:24.104 "core_count": 1 00:39:24.104 } 00:39:24.104 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:24.104 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:24.105 | select(.opcode=="crc32c") 00:39:24.105 | "\(.module_name) \(.executed)"' 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94390 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94390 ']' 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94390 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94390 00:39:24.105 killing process with pid 94390 00:39:24.105 Received shutdown signal, test time was about 2.000000 seconds 00:39:24.105 00:39:24.105 Latency(us) 00:39:24.105 [2024-11-20T05:50:44.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:39:24.105 [2024-11-20T05:50:44.024Z] =================================================================================================================== 00:39:24.105 [2024-11-20T05:50:44.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94390' 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94390 00:39:24.105 05:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94390 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94467 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94467 /var/tmp/bperf.sock 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94467 ']' 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:24.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:24.364 05:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:24.364 [2024-11-20 05:50:44.168629] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:39:24.364 [2024-11-20 05:50:44.168714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94467 ] 00:39:24.623 [2024-11-20 05:50:44.306077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:24.623 [2024-11-20 05:50:44.351786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:25.191 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:25.191 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:39:25.191 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:25.191 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:25.191 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:25.759 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:25.759 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:26.018 nvme0n1 00:39:26.018 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:26.018 05:50:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:26.018 Running I/O for 2 seconds... 
00:39:27.891 29038.00 IOPS, 113.43 MiB/s [2024-11-20T05:50:47.810Z] 29280.50 IOPS, 114.38 MiB/s 00:39:27.891 Latency(us) 00:39:27.891 [2024-11-20T05:50:47.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.891 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:27.891 nvme0n1 : 2.01 29268.91 114.33 0.00 0.00 4369.05 2262.55 12607.88 00:39:27.891 [2024-11-20T05:50:47.810Z] =================================================================================================================== 00:39:27.891 [2024-11-20T05:50:47.810Z] Total : 29268.91 114.33 0.00 0.00 4369.05 2262.55 12607.88 00:39:27.891 { 00:39:27.891 "results": [ 00:39:27.891 { 00:39:27.891 "job": "nvme0n1", 00:39:27.891 "core_mask": "0x2", 00:39:27.891 "workload": "randwrite", 00:39:27.891 "status": "finished", 00:39:27.891 "queue_depth": 128, 00:39:27.892 "io_size": 4096, 00:39:27.892 "runtime": 2.006771, 00:39:27.892 "iops": 29268.910104840063, 00:39:27.892 "mibps": 114.3316800970315, 00:39:27.892 "io_failed": 0, 00:39:27.892 "io_timeout": 0, 00:39:27.892 "avg_latency_us": 4369.052985173366, 00:39:27.892 "min_latency_us": 2262.552380952381, 00:39:27.892 "max_latency_us": 12607.878095238095 00:39:27.892 } 00:39:27.892 ], 00:39:27.892 "core_count": 1 00:39:27.892 } 00:39:28.151 05:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:28.151 05:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:28.151 05:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:28.151 05:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:28.151 05:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:28.151 | select(.opcode=="crc32c") 00:39:28.151 | "\(.module_name) \(.executed)"' 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94467 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94467 ']' 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94467 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94467 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94467' killing process with pid 94467 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94467 00:39:28.411 Received shutdown signal, test time was about 2.000000 seconds 00:39:28.411 00:39:28.411 Latency(us) 00:39:28.411 [2024-11-20T05:50:48.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:28.411 [2024-11-20T05:50:48.330Z] =================================================================================================================== 00:39:28.411 [2024-11-20T05:50:48.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94467 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94558 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94558 /var/tmp/bperf.sock 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 94558 ']' 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:28.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:28.411 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:28.670 [2024-11-20 05:50:48.330017] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:28.670 [2024-11-20 05:50:48.330246] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94558 ] 00:39:28.670 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:28.670 Zero copy mechanism will not be used.
00:39:28.670 [2024-11-20 05:50:48.469788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.670 [2024-11-20 05:50:48.513023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.670 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:28.670 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:39:28.670 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:39:28.670 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:39:28.670 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:29.237 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:29.237 05:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:29.237 nvme0n1 00:39:29.495 05:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:39:29.495 05:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:29.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:29.495 Zero copy mechanism will not be used. 00:39:29.495 Running I/O for 2 seconds...
00:39:31.406 9271.00 IOPS, 1158.88 MiB/s [2024-11-20T05:50:51.325Z] 9540.50 IOPS, 1192.56 MiB/s 00:39:31.406 Latency(us) 00:39:31.406 [2024-11-20T05:50:51.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.406 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:31.406 nvme0n1 : 2.00 9537.68 1192.21 0.00 0.00 1674.36 1100.07 5492.54 00:39:31.406 [2024-11-20T05:50:51.325Z] =================================================================================================================== 00:39:31.406 [2024-11-20T05:50:51.325Z] Total : 9537.68 1192.21 0.00 0.00 1674.36 1100.07 5492.54 00:39:31.406 { 00:39:31.406 "results": [ 00:39:31.406 { 00:39:31.406 "job": "nvme0n1", 00:39:31.406 "core_mask": "0x2", 00:39:31.406 "workload": "randwrite", 00:39:31.406 "status": "finished", 00:39:31.406 "queue_depth": 16, 00:39:31.406 "io_size": 131072, 00:39:31.406 "runtime": 2.002794, 00:39:31.406 "iops": 9537.675866814061, 00:39:31.406 "mibps": 1192.2094833517576, 00:39:31.406 "io_failed": 0, 00:39:31.406 "io_timeout": 0, 00:39:31.406 "avg_latency_us": 1674.3572971167318, 00:39:31.406 "min_latency_us": 1100.0685714285714, 00:39:31.406 "max_latency_us": 5492.540952380952 00:39:31.407 } 00:39:31.407 ], 00:39:31.407 "core_count": 1 00:39:31.407 } 00:39:31.407 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:39:31.407 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:39:31.407 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:39:31.407 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:39:31.407 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:39:31.407 | select(.opcode=="crc32c") 00:39:31.407 | "\(.module_name) \(.executed)"' 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94558 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94558 ']' 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94558 00:39:31.664 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:39:31.665 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:31.665 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94558 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:31.923 killing process with pid 94558 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94558' 00:39:31.923 Received shutdown signal, test time was about 2.000000 seconds 00:39:31.923 00:39:31.923 Latency(us) 00:39:31.923 [2024-11-20T05:50:51.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:31.923 [2024-11-20T05:50:51.842Z] =================================================================================================================== 00:39:31.923 [2024-11-20T05:50:51.842Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94558 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94558 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94282 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 94282 ']' 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 94282 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94282 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:31.923 killing process with pid 94282 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94282' 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 94282 00:39:31.923 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 94282 00:39:32.182 00:39:32.182 real 0m15.416s 00:39:32.182 user 0m29.065s 00:39:32.182 sys 0m4.807s 00:39:32.182 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:32.182 05:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:32.182 ************************************ 00:39:32.182 END TEST nvmf_digest_clean 00:39:32.182 ************************************ 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:32.182 ************************************ 00:39:32.182 START TEST nvmf_digest_error 00:39:32.182 ************************************ 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:39:32.182 
05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94652 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94652 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 94652 ']' 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:32.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:32.182 05:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:32.182 [2024-11-20 05:50:52.081487] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:32.182 [2024-11-20 05:50:52.081586] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:32.442 [2024-11-20 05:50:52.228727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:32.442 [2024-11-20 05:50:52.273557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:32.442 [2024-11-20 05:50:52.273620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:32.442 [2024-11-20 05:50:52.273630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:32.442 [2024-11-20 05:50:52.273639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:32.442 [2024-11-20 05:50:52.273646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:32.442 [2024-11-20 05:50:52.273909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:33.378 [2024-11-20 05:50:53.102352] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:33.378 null0 00:39:33.378 [2024-11-20 05:50:53.197393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:33.378 [2024-11-20 05:50:53.221495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94701 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94701 /var/tmp/bperf.sock 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 94701 ']' 00:39:33.378 05:50:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:33.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:33.378 05:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:33.378 [2024-11-20 05:50:53.263069] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:33.378 [2024-11-20 05:50:53.263136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94701 ] 00:39:33.637 [2024-11-20 05:50:53.402370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.637 [2024-11-20 05:50:53.447835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:34.574 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:34.574 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:39:34.574 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:34.574 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:34.574 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:34.575 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.575 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:34.575 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.575 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:34.575 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:35.142 nvme0n1 00:39:35.142 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:39:35.142 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.142 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:35.142 05:50:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.142 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:35.142 05:50:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:35.142 Running I/O for 2 seconds... 00:39:35.142 [2024-11-20 05:50:54.923374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.923436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.923449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:54.932725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.932759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.932770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:54.942931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.942964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.942992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:54.953433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.953480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.953492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:54.964303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.964339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.964351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:54.975580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.975614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.975626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:39:35.142 [2024-11-20 05:50:54.986897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.986931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.986942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:54.998021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:54.998055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:54.998068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:55.006665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:55.006697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:55.006709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.142 [2024-11-20 05:50:55.017634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.142 [2024-11-20 05:50:55.017667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.142 [2024-11-20 05:50:55.017694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.143 [2024-11-20 05:50:55.028142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.143 [2024-11-20 05:50:55.028176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.143 [2024-11-20 05:50:55.028204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.143 [2024-11-20 05:50:55.038689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.143 [2024-11-20 05:50:55.038722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.143 [2024-11-20 05:50:55.038733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.143 [2024-11-20 05:50:55.049121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.143 [2024-11-20 05:50:55.049154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.143 [2024-11-20 05:50:55.049166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.060270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.060305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.060332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.070767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.070802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.070829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.081159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.081193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.081204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.092116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.092150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.092161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.102972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.103007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.103035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.113373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.113407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.113418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.123795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.123829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.123840] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.134191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.134222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.134234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.144526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.144559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.144570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.155260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.155292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.155319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.165828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.165860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.165888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.176227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.402 [2024-11-20 05:50:55.176261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.402 [2024-11-20 05:50:55.176272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.402 [2024-11-20 05:50:55.186561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.186592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.186619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.196938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.196971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 
05:50:55.196982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.207249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.207279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.207306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.217642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.217673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.217699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.228122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.228158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.228169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.238395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.238425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.238435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.248654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.248686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.248697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.259532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.259564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.259576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.270195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.270229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24387 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.270241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.280775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.280806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.280817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.291652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.291686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.291697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.302413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.302460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.302472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.403 [2024-11-20 05:50:55.311878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.403 [2024-11-20 05:50:55.311912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.403 [2024-11-20 05:50:55.311923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.322687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.322717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.322744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.333072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.333106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.333117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.343628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.343662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.343673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.354032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.354063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.354090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.364747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.364781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.364792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.375005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.375036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.375063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.385327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.385362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.385373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.396164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.396197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.396208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.406767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.406798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.406824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.417740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.417771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.417799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.428002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.428035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.428046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.438816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.438847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.438874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.448558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.448590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.448617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.458803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.458838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.458849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.469034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.469066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.469093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.479647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.479681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.479692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.489980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.490013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.490041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.500200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.500235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.500246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.510294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.510325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.510353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.520514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.520546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.520557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.531184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.531215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.531241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.541510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.541542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.541569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.552075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.552109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.552120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.561477] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.561507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.561534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.663 [2024-11-20 05:50:55.571671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.663 [2024-11-20 05:50:55.571705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.663 [2024-11-20 05:50:55.571716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.581756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.581787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.581814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.591873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.591907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.591918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.601953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.601984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.602011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.613929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.613959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.613986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.624216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.624249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.624260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.634355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.634386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.634412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.644795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.644825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.644852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.655481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.655512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.655523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.665774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.665804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.665831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.676026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.676060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.676071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.686173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.686205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.686232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.694996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.695027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.695054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.707324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.707378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.707389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.717462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.717495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.717507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.728413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.728454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.728466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.739536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.739569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.739580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.750055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.750085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.750112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.760653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.760687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.760698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.770869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.770900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.770926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.781154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.923 [2024-11-20 05:50:55.781189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.923 [2024-11-20 05:50:55.781200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.923 [2024-11-20 05:50:55.791356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.924 [2024-11-20 05:50:55.791408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.924 [2024-11-20 05:50:55.791419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.924 [2024-11-20 05:50:55.801682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.924 [2024-11-20 05:50:55.801712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.924 [2024-11-20 05:50:55.801738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.924 [2024-11-20 05:50:55.811940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.924 [2024-11-20 05:50:55.811973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.924 [2024-11-20 05:50:55.811984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.924 [2024-11-20 05:50:55.822150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.924 [2024-11-20 05:50:55.822181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.924 [2024-11-20 05:50:55.822207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:35.924 [2024-11-20 05:50:55.833027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:35.924 [2024-11-20 05:50:55.833061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:35.924 [2024-11-20 05:50:55.833072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.842236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.842268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:36.183 [2024-11-20 05:50:55.842295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.853308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.853344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.853355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.863567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.863599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.863610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.873797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.873829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.873855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.884035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.884067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.884078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.894348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.894381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.894408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.904711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.904741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.904767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 24324.00 IOPS, 95.02 MiB/s [2024-11-20T05:50:56.102Z] [2024-11-20 05:50:55.916097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.916134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.916145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.925986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.926019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.926046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.936427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.936469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.936481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.946439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.946477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.946504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.956284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.956320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.956331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.967257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.967290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.967317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.978273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.978306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.978317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.987742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 
00:39:36.183 [2024-11-20 05:50:55.987775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.987786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:55.996936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:55.996968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:55.996995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:56.008368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:56.008402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:56.008413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:56.017753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:56.017785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:56.017812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:56.028544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:56.028577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:56.028588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:56.038788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:56.038819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:56.038846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:56.049090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:56.049122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.183 [2024-11-20 05:50:56.049149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.183 [2024-11-20 05:50:56.059874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.183 [2024-11-20 05:50:56.059908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.184 [2024-11-20 05:50:56.059936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.184 [2024-11-20 05:50:56.069741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.184 [2024-11-20 05:50:56.069772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.184 [2024-11-20 05:50:56.069799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.184 [2024-11-20 05:50:56.079873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.184 [2024-11-20 05:50:56.079906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.184 [2024-11-20 05:50:56.079933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.184 [2024-11-20 05:50:56.089995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.184 [2024-11-20 05:50:56.090026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.184 [2024-11-20 05:50:56.090052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.100198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.100232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.100243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.111007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.111038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.111065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.119968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.120001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.120012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.130768] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.130801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.130813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.142094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.142130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.142142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.152791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.152824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.152835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.163317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.163360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.163371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.173916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.173947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.443 [2024-11-20 05:50:56.173960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.443 [2024-11-20 05:50:56.185062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.443 [2024-11-20 05:50:56.185098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.185109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.195638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.195674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.195685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.206481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.206514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.206526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.216595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.216628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.216639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.228225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.228260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.228271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.239277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.239310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.239321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.249180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.249213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.249224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.260254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.260289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.260317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.270419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.270478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.270491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.281431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.281470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.281481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.292017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.292051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.292062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.302617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.302650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.302662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.313291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.313323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.313334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.323938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.323972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.323984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.334553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.334586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.334597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.345491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.345523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.345535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.444 [2024-11-20 05:50:56.355959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.444 [2024-11-20 05:50:56.355992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.444 [2024-11-20 05:50:56.356003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.703 [2024-11-20 05:50:56.366989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.703 [2024-11-20 05:50:56.367021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.703 [2024-11-20 05:50:56.367048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.703 [2024-11-20 05:50:56.375779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.703 [2024-11-20 05:50:56.375812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.703 [2024-11-20 05:50:56.375823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.703 [2024-11-20 05:50:56.386146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.703 [2024-11-20 05:50:56.386178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.703 [2024-11-20 05:50:56.386205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.703 [2024-11-20 05:50:56.396397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.703 [2024-11-20 05:50:56.396432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.703 [2024-11-20 05:50:56.396452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.703 [2024-11-20 05:50:56.405416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.703 [2024-11-20 05:50:56.405460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.703 [2024-11-20 05:50:56.405472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.416548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.416582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:36.704 [2024-11-20 05:50:56.416593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.427768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.427802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.427814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.438196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.438228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.438255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.448487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.448519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.448546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.457705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.457737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.457763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.467187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.467219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.467246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.478261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.478295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.478305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.488632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.488663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:8935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.488690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.498802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.498834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.498860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.508912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.508944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.508970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.519336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.519390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.519401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.529965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.529996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.530023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.540626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.540657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.540684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.551222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.551253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.551279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.561873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.561905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.561932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.571965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.571998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.572009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.582179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.582210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.582237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.592441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.592485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.592496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.602654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.602684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.602711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.704 [2024-11-20 05:50:56.612786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.704 [2024-11-20 05:50:56.612816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.704 [2024-11-20 05:50:56.612843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.964 [2024-11-20 05:50:56.622948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.964 [2024-11-20 05:50:56.622978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.964 [2024-11-20 05:50:56.623005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.964 [2024-11-20 05:50:56.633103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 
00:39:36.964 [2024-11-20 05:50:56.633135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.964 [2024-11-20 05:50:56.633162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.964 [2024-11-20 05:50:56.643737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.964 [2024-11-20 05:50:56.643770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.964 [2024-11-20 05:50:56.643781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.964 [2024-11-20 05:50:56.654004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.964 [2024-11-20 05:50:56.654035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.964 [2024-11-20 05:50:56.654063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.964 [2024-11-20 05:50:56.664254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.964 [2024-11-20 05:50:56.664288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.964 [2024-11-20 05:50:56.664299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.964 [2024-11-20 05:50:56.674480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.964 [2024-11-20 05:50:56.674510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.964 [2024-11-20 05:50:56.674537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.964 [2024-11-20 05:50:56.685016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.964 [2024-11-20 05:50:56.685049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.964 [2024-11-20 05:50:56.685076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.695516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.695559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.695570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.704226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.704261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.704272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.714402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.714435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.714469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.724521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.724554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.724565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.734729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.734760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.734787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.744857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.744888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.744915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.755544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.755577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.755588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.766352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.766385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.766412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.776368] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.776403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.776414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.786301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.786333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.786360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.796960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.796991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.797019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.807402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.807435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.807454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.816941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.816972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.816999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.827007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.827039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.827066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.837206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.837238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.837265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:39:36.965 [2024-11-20 05:50:56.847813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.847848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.847858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.858460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.858491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.858519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.869400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.869432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.869468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:36.965 [2024-11-20 05:50:56.878405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:36.965 [2024-11-20 05:50:56.878437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:36.965 [2024-11-20 05:50:56.878474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.225 [2024-11-20 05:50:56.889049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:37.225 [2024-11-20 05:50:56.889082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.225 [2024-11-20 05:50:56.889109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.225 [2024-11-20 05:50:56.899185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:37.225 [2024-11-20 05:50:56.899218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.225 [2024-11-20 05:50:56.899245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.225 24463.50 IOPS, 95.56 MiB/s [2024-11-20T05:50:57.144Z] [2024-11-20 05:50:56.911670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1200190) 00:39:37.225 [2024-11-20 05:50:56.911703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:37.225 [2024-11-20 05:50:56.911714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:37.225 00:39:37.225 Latency(us) 00:39:37.225 [2024-11-20T05:50:57.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:37.225 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:37.225 nvme0n1 : 2.00 24473.58 95.60 0.00 0.00 5224.95 2855.50 13544.11 00:39:37.225 [2024-11-20T05:50:57.144Z] =================================================================================================================== 00:39:37.225 [2024-11-20T05:50:57.144Z] Total : 24473.58 95.60 0.00 0.00 5224.95 2855.50 13544.11 00:39:37.225 { 00:39:37.225 "results": [ 00:39:37.225 { 00:39:37.225 "job": "nvme0n1", 00:39:37.225 "core_mask": "0x2", 00:39:37.225 "workload": "randread", 00:39:37.225 "status": "finished", 00:39:37.225 "queue_depth": 128, 00:39:37.225 "io_size": 4096, 00:39:37.225 "runtime": 2.004406, 00:39:37.225 "iops": 24473.584692921493, 00:39:37.225 "mibps": 95.59994020672458, 00:39:37.225 "io_failed": 0, 00:39:37.225 "io_timeout": 0, 00:39:37.225 "avg_latency_us": 5224.953193412643, 00:39:37.225 "min_latency_us": 2855.497142857143, 00:39:37.225 "max_latency_us": 13544.106666666667 00:39:37.225 } 00:39:37.225 ], 00:39:37.225 "core_count": 1 00:39:37.225 } 00:39:37.225 05:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:37.225 05:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:37.225 | .driver_specific 00:39:37.225 | .nvme_error 00:39:37.225 | .status_code 00:39:37.225 | .command_transient_transport_error' 00:39:37.225 05:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:37.225 05:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:37.225 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 )) 00:39:37.225 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94701 00:39:37.225 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94701 ']' 00:39:37.225 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94701 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94701 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94701' 00:39:37.484 killing process with pid 94701 00:39:37.484 Received shutdown signal, test time was about 2.000000 seconds 00:39:37.484 00:39:37.484 Latency(us) 00:39:37.484 [2024-11-20T05:50:57.403Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:37.484 [2024-11-20T05:50:57.403Z] =================================================================================================================== 00:39:37.484 [2024-11-20T05:50:57.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 94701 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 94701 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94790 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94790 /var/tmp/bperf.sock 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 94790 ']' 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:37.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:37.484 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:37.742 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:37.742 Zero copy mechanism will not be used. 00:39:37.742 [2024-11-20 05:50:57.402576] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
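For reference, the (( 192 > 0 )) check traced above is the pass criterion of the first randread pass: the injected crc32c digest errors must have surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions, and the count is read back from bdevperf's per-bdev NVMe error statistics over the bperf RPC socket. A minimal sketch of that step, reusing the socket path and jq filter exactly as they appear in this trace:

    # Requires bdev_nvme_set_options --nvme-error-stat (see the controller setup below).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Fail unless at least one injected digest error completed as a
    # transient transport error (192 of them did in this pass).
    (( errcount > 0 ))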
00:39:37.742 [2024-11-20 05:50:57.402674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94790 ] 00:39:37.742 [2024-11-20 05:50:57.553910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.742 [2024-11-20 05:50:57.596933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.000 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:38.000 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:39:38.000 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:38.000 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:38.259 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:38.259 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.259 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:38.259 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.259 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:38.259 05:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:38.518 nvme0n1 00:39:38.518 05:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:39:38.518 05:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.518 05:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:38.518 05:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.518 05:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:38.518 05:50:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:38.518 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:38.518 Zero copy mechanism will not be used. 00:39:38.518 Running I/O for 2 seconds... 
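The second pass set up above drives 131072-byte random reads at queue depth 16 (bdevperf was started with -w randread -o 131072 -t 2 -q 16 -z) against a connection attached with data digest enabled, while crc32c corruption is injected in the accel layer; the corrupted digests then show up below as "data digest error" records followed by transient transport error completions. A minimal sketch of the traced setup, assuming the same sockets and target address as this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Count NVMe status codes and retry failed I/O indefinitely on the bperf side.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach with --ddgst so the crc32c data digest of each payload is verified.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm crc32c corruption; rpc_cmd in the trace sends this to the test's
    # main application socket rather than bperf.sock, with -o/-t/-i as captured above.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the timed workload on the already-running bdevperf instance.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests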
00:39:38.518 [2024-11-20 05:50:58.431988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.518 [2024-11-20 05:50:58.432040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.518 [2024-11-20 05:50:58.432054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.518 [2024-11-20 05:50:58.434880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.518 [2024-11-20 05:50:58.434913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.518 [2024-11-20 05:50:58.434925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.778 [2024-11-20 05:50:58.438315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.778 [2024-11-20 05:50:58.438351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.778 [2024-11-20 05:50:58.438363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.778 [2024-11-20 05:50:58.441992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.778 [2024-11-20 05:50:58.442030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.442042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.444939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.444978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.444989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.448367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.448405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.448417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.451308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.451348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.451360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.454379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.454414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.454425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.457323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.457360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.457372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.460297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.460331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.460342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.464043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.464079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.464090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.467672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.467709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.467720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.471296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.471330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.471365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.474730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.474764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.474791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.478269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.478303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.478315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.481839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.481874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.481901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.485366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.485403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.485430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.488968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.489005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.489033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.492509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.492540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.492567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.495924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.495959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.495986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.499196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.499232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.499244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.501897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.501931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.501958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.505250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.505287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.505298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.508775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.508810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.508821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.510844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.510878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.510905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.514494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.514526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.514537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.517884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.517919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.517930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.520549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.520594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.520605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.523536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.523572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.523583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.526167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.779 [2024-11-20 05:50:58.526201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.779 [2024-11-20 05:50:58.526212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.779 [2024-11-20 05:50:58.529268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.529301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.529311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.531951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.531987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.531999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.534971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.535003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.535014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.538298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.538334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.538345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.541601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.541635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.541645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.543773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.543810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.543821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.547570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.547608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.547619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.551129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.551164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.551175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.553423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.553484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.553496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.557325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.557360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.557371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.561027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.561062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.561073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.564804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.564839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.564850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.567520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.567559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.567570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.570634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.570667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.570677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.574393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.574429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.574440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.578092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.578127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.578138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.581811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.581846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.581857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.585404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.585440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.585464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.589095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 
00:39:38.780 [2024-11-20 05:50:58.589130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.589141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.592882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.592918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.592929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.596676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.596711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.596722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.600364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.600402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.600414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.604080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.604118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.604129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.607602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.607638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.607649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.610388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.610421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.780 [2024-11-20 05:50:58.610432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.780 [2024-11-20 05:50:58.612996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.780 [2024-11-20 05:50:58.613029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.613056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.616610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.616654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.616681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.620212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.620249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.620260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.623732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.623768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.623779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.627086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.627119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.627146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.630609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.630641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.630667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.634407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.634469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.634481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.638137] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.638170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.638181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.640325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.640361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.640372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.643835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.643872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.643884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.646914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.646947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.646958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.649683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.649717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.649728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.652911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.652948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.652959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.655589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.655623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.655634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:39:38.781 [2024-11-20 05:50:58.658790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.658822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.658849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.661621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.661653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.661680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.664179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.664217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.664229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.667798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.667834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.667844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.671057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.671089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.671116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.674557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.674587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.674614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.677152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.677188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.677199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.680499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.680529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.680541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.683308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.683363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.683374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.686048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.686082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.686092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.689799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.689834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.689860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:38.781 [2024-11-20 05:50:58.693545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:38.781 [2024-11-20 05:50:58.693579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:38.781 [2024-11-20 05:50:58.693590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.696158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.696193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.696220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.699262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.699296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.699307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.702771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.702804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.702830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.706250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.706287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.706315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.709515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.709546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.709557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.711924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.711960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.711971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.714845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.714878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.714904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.717999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.718035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.718046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.720859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.720893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 
[2024-11-20 05:50:58.720920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.723878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.723915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.723926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.726646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.726680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.726691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.729834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.729868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.044 [2024-11-20 05:50:58.729895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.044 [2024-11-20 05:50:58.732413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.044 [2024-11-20 05:50:58.732457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.732468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.735287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.735322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.735333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.738296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.738328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.738355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.741710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.741745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.741771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.745352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.745387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.745413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.748902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.748936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.748962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.752416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.752465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.752476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.756024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.756059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.756086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.759741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.759778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.759805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.763174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.763205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.763232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.766808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.766842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.766853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.770278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.770311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.770338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.772971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.773007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.773017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.776260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.776298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.776309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.779881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.779917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.779944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.783581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.783616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.783628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.785924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.785958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.785986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.789738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.789775] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.789786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.793432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.793491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.793503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.796187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.796222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.796233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.799398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.799432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.799454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.045 [2024-11-20 05:50:58.802996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.045 [2024-11-20 05:50:58.803033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.045 [2024-11-20 05:50:58.803043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.805631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.805667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.805694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.808966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.809002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.809029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.812631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.812665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.812692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.816239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.816278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.816289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.819689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.819724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.819751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.823284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.823318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.823329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.826906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.826942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.826952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.830373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.830407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.830434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.833831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.833865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.833891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.837234] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.837266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.837293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.840803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.840836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.840863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.844414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.844475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.844486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.847838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.847873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.847900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.851396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.851428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.851455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.854783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.854815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.854841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.858176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.858209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.858236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:39:39.046 [2024-11-20 05:50:58.861877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.861915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.861942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.865634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.865669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.865680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.869234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.869268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.869294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.872991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.873025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.873051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.876484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.876518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.876545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.879638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.879672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.879682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.046 [2024-11-20 05:50:58.882707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.046 [2024-11-20 05:50:58.882737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.046 [2024-11-20 05:50:58.882763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.885060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.885093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.885120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.888629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.888662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.888689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.890809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.890841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.890867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.893902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.893935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.893962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.897735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.897769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.897795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.900381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.900416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.900427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.903642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.903677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.903688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.907321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.907360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.907387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.911175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.911208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.911235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.913932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.913962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.913972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.917211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.917248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.917259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.920992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.921029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.921040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.924636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.924671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.924682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.926745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.926775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.926802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.930412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.930471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.930482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.933717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.933749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.933776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.936074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.936111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.936122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.939716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.939752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.939779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.942359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.942392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.942419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.945623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.945657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.945683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.949313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.949347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:39.047 [2024-11-20 05:50:58.949374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.952981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.047 [2024-11-20 05:50:58.953015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.047 [2024-11-20 05:50:58.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.047 [2024-11-20 05:50:58.956686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.048 [2024-11-20 05:50:58.956721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.048 [2024-11-20 05:50:58.956748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.960145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.960183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.960194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.963573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.963608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.963620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.967070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.967105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.967116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.970612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.970647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.970658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.974177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.974210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.974237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.977802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.977836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.977862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.981066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.981099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.981126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.984627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.984660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.984686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.988081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.988118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.988128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.991729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.991765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.991776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.311 [2024-11-20 05:50:58.995151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.311 [2024-11-20 05:50:58.995184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.311 [2024-11-20 05:50:58.995210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:58.998799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:58.998831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:58.998859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.002052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.002084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.002110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.005501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.005534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.005545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.008221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.008257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.008267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.011586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.011623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.011634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.014892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.014924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.014951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.017281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.017318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.017345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.020417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 
[2024-11-20 05:50:59.020476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.020500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.023654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.023691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.023702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.026113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.026144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.026171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.029124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.029159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.029170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.032601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.032636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.032647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.036096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.036132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.036143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.039573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.039607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.039634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.042667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.042698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.042725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.045206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.045243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.045270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.048924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.048962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.048972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.052697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.052734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.052745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.055399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.055431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.055453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.058664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.058696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.058722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.062421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.062482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.062493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.066011] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.066044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.066071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.068172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.068209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.068220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.071899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.071937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.071964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.074506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.074536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.074562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.077752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.077787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.312 [2024-11-20 05:50:59.077798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.312 [2024-11-20 05:50:59.081597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.312 [2024-11-20 05:50:59.081631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.313 [2024-11-20 05:50:59.081657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.313 [2024-11-20 05:50:59.085438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.313 [2024-11-20 05:50:59.085487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.313 [2024-11-20 05:50:59.085498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 
00:39:39.576 9363.00 IOPS, 1170.38 MiB/s [2024-11-20T05:50:59.495Z]
00:39:39.837 [2024-11-20 05:50:59.500374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.837 [2024-11-20 05:50:59.500411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.837 [2024-11-20 05:50:59.500438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.837 [2024-11-20 05:50:59.504025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.837 [2024-11-20 05:50:59.504063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.837 [2024-11-20 05:50:59.504074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.837 [2024-11-20 05:50:59.507758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.837 [2024-11-20 05:50:59.507794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.837 [2024-11-20 05:50:59.507822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.510390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.510423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.510434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.513406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.513440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.513462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.516562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.516595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.516607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.519356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.519407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.519418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.522730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.522762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.522789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.525269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.525300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.525327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.528655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.528690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.528701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.531488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.531520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.531547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.534207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.534239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.534266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.537202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.537236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.537247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.540400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.540436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.540466] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.542936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.542967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.542994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.546115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.546149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.546176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.548920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.548954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.548965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.551829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.551869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.551880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.555023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.555054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.555080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.557691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.557724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.557735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.560869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.560906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 
05:50:59.560933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.563715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.563751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.563762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.566930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.566961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.566988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.569340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.569376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.569387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.572537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.572571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.572581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.575793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.575829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.575839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.578239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.578272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.578284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.581265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.581301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.581328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.584611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.584647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.838 [2024-11-20 05:50:59.584658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.838 [2024-11-20 05:50:59.588090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.838 [2024-11-20 05:50:59.588125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.588137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.590401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.590436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.590462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.593953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.593986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.594012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.597540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.597573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.597583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.601128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.601158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.601169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.603461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.603494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.603505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.606926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.606961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.606973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.610230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.610267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.610278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.612867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.612902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.612929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.616132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.616169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.616180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.618850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.618886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.618897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.621893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.621929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.621941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.625120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.625155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.625166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.628062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.628097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.628108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.631265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.631299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.631311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.633644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.633677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.633688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.636759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.636794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.636805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.640353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.640390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.640401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.642726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.642759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.642770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.645962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 
[2024-11-20 05:50:59.645997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.646025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.649055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.649089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.649101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.652138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.652175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.652185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.655039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.655072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.655083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.658087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.658120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.658131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.661579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.661612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.661623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.663779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.663816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.663827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.839 [2024-11-20 05:50:59.667065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1692e20) 00:39:39.839 [2024-11-20 05:50:59.667099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.839 [2024-11-20 05:50:59.667110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.670826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.670861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.670888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.674202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.674236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.674263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.676543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.676575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.676586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.679919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.679957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.679968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.682468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.682499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.682509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.685720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.685756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.685783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.689499] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.689532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.689559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.693371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.693406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.693417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.696004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.696041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.696053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.699174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.699207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.699218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.702950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.702984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.703011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.705627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.705659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.705670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.708740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.708775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.708802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:39:39.840 [2024-11-20 05:50:59.712109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.712147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.712158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.714541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.714573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.714584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.717543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.717575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.717602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.720762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.720798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.720808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.722994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.723026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.723036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.725963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.725998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.726025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.729080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.729115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.729126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.732182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.732217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.732228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.734786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.734818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.734845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.737888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.737922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.737933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.740753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.740787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.840 [2024-11-20 05:50:59.740798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:39.840 [2024-11-20 05:50:59.743470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.840 [2024-11-20 05:50:59.743501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.841 [2024-11-20 05:50:59.743512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:39.841 [2024-11-20 05:50:59.746181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.841 [2024-11-20 05:50:59.746215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.841 [2024-11-20 05:50:59.746242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:39.841 [2024-11-20 05:50:59.748905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.841 [2024-11-20 05:50:59.748941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.841 [2024-11-20 05:50:59.748952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:39.841 [2024-11-20 05:50:59.752427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:39.841 [2024-11-20 05:50:59.752482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:39.841 [2024-11-20 05:50:59.752510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.102 [2024-11-20 05:50:59.756070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.102 [2024-11-20 05:50:59.756108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.756119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.758662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.758694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.758721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.761942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.761976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.761987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.765638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.765676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.765687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.768355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.768390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.768401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.771584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.771618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.771645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.775156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.775189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.775201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.778791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.778825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.778836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.782466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.782514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.782541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.785980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.786016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.786043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.789548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.789581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.789609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.793097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.793132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.103 [2024-11-20 05:50:59.793158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.103 [2024-11-20 05:50:59.796621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.103 [2024-11-20 05:50:59.796654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:40.103 [2024-11-20 05:50:59.796681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:39:40.103 [2024-11-20 05:50:59.800021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20)
00:39:40.103 [2024-11-20 05:50:59.800060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:40.103 [2024-11-20 05:50:59.800071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern -- "data digest error on tqpair=(0x1692e20)" from nvme_tcp.c:1365, the failed READ command print from nvme_qpair.c:243, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474 -- repeats for every outstanding READ on qid:1 between 05:50:59.800 and 05:51:00.268; only the timestamps and the lba, cid, and sqhd values differ ...]
00:39:40.370 [2024-11-20 05:51:00.268663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20)
00:39:40.370 [2024-11-20 05:51:00.268698]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.370 [2024-11-20 05:51:00.268708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.370 [2024-11-20 05:51:00.272332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.370 [2024-11-20 05:51:00.272369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.370 [2024-11-20 05:51:00.272381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.370 [2024-11-20 05:51:00.274898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.370 [2024-11-20 05:51:00.274927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.370 [2024-11-20 05:51:00.274954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.370 [2024-11-20 05:51:00.277854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.370 [2024-11-20 05:51:00.277887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.370 [2024-11-20 05:51:00.277914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.370 [2024-11-20 05:51:00.281109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.370 [2024-11-20 05:51:00.281146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.370 [2024-11-20 05:51:00.281157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.283599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.283633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.283660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.286347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.286379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.286406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.289739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.289772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.289798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.293242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.293277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.293289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.295472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.295504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.295514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.298961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.298994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.299021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.302520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.302552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.302563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.304894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.304929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.304940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.308654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.308691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.308702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.312121] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.312155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.312166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.314301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.314332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.314343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.317807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.317843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.317871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.321276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.321313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.321323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.630 [2024-11-20 05:51:00.324850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.630 [2024-11-20 05:51:00.324886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.630 [2024-11-20 05:51:00.324897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.328424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.328467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.328477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.332081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.332116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.332127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:39:40.631 [2024-11-20 05:51:00.335794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.335830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.335841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.339320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.339359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.339386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.342729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.342761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.342787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.346081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.346117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.346127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.349370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.349402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.349413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.352670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.352703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.352729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.355139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.355173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.355201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.358263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.358294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.358321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.361303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.361338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.361349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.363729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.363766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.363777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.366511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.366541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.366567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.369489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.369521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.369532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.372912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.372946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.372973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.376625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.376658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.376684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.379781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.379816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.379827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.383358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.383408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.383435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.386655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.386685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.386712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.390276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.390310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.390337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.393884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.393917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.393944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.397292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.397325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.397352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.401000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.401033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:40.631 [2024-11-20 05:51:00.401060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.404304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.404339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.404350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.407910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.407947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.407957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.411582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.631 [2024-11-20 05:51:00.411615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.631 [2024-11-20 05:51:00.411626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:40.631 [2024-11-20 05:51:00.415001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.632 [2024-11-20 05:51:00.415035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.632 [2024-11-20 05:51:00.415046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:40.632 [2024-11-20 05:51:00.418597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.632 [2024-11-20 05:51:00.418631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.632 [2024-11-20 05:51:00.418642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:40.632 [2024-11-20 05:51:00.421099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1692e20) 00:39:40.632 [2024-11-20 05:51:00.421134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:40.632 [2024-11-20 05:51:00.421145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:40.632 9487.50 IOPS, 1185.94 MiB/s 00:39:40.632 Latency(us) 00:39:40.632 [2024-11-20T05:51:00.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.632 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:39:40.632 
00:39:40.632 nvme0n1 : 2.00 9483.85 1185.48 0.00 0.00 1684.30 503.22 12795.12
00:39:40.632 [2024-11-20T05:51:00.551Z] ===================================================================================================================
00:39:40.632 [2024-11-20T05:51:00.551Z] Total : 9483.85 1185.48 0.00 0.00 1684.30 503.22 12795.12
00:39:40.632 {
00:39:40.632   "results": [
00:39:40.632     {
00:39:40.632       "job": "nvme0n1",
00:39:40.632       "core_mask": "0x2",
00:39:40.632       "workload": "randread",
00:39:40.632       "status": "finished",
00:39:40.632       "queue_depth": 16,
00:39:40.632       "io_size": 131072,
00:39:40.632       "runtime": 2.002457,
00:39:40.632       "iops": 9483.849091391226,
00:39:40.632       "mibps": 1185.4811364239033,
00:39:40.632       "io_failed": 0,
00:39:40.632       "io_timeout": 0,
00:39:40.632       "avg_latency_us": 1684.2955027820196,
00:39:40.632       "min_latency_us": 503.22285714285715,
00:39:40.632       "max_latency_us": 12795.12380952381
00:39:40.632     }
00:39:40.632   ],
00:39:40.632   "core_count": 1
00:39:40.632 }
00:39:40.632 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:39:40.632 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:39:40.632 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:39:40.632 | .driver_specific
00:39:40.632 | .nvme_error
00:39:40.632 | .status_code
00:39:40.632 | .command_transient_transport_error'
00:39:40.632 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 613 > 0 ))
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94790
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94790 ']'
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94790
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94790
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:39:40.890 killing process with pid 94790
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94790'
00:39:40.890 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 94790
00:39:40.890 Received shutdown signal, test time was about 2.000000 seconds
00:39:40.890
00:39:40.890 Latency(us)
00:39:40.890 [2024-11-20T05:51:00.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:40.890 [2024-11-20T05:51:00.810Z] ===================================================================================================================
00:39:40.891 [2024-11-20T05:51:00.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
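The (( 613 > 0 )) assertion above is where the digest test actually passes or fails: get_transient_errcount pulls the bdev's NVMe error counters over the bperf RPC socket and extracts the transient-transport-error count with the jq filter shown in the trace. A minimal sketch of the helper, reconstructed from that trace (the rpc.py and socket paths are the ones this run uses, not general defaults):

    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat includes per-status-code NVMe error counters here
        # because the harness sets bdev_nvme_set_options --nvme-error-stat.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # in this run: (( 613 > 0 ))

The summary numbers are also self-consistent: 9483.85 IOPS at an IO size of 131072 bytes (1/8 MiB) is 9483.85 / 8 = 1185.48 MiB/s, exactly the reported throughput.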
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 94790
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94862
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94862 /var/tmp/bperf.sock
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 94862 ']'
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:39:40.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:39:40.891 05:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
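Each run_bperf_err iteration brings up a fresh bdevperf instance in idle mode and only then drives it over JSON-RPC. The launch pattern, using exactly the flags from the trace above (-z keeps bdevperf from starting I/O on its own; waitforlisten is the autotest_common.sh helper that polls the socket, here with max_retries=100):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!                                    # 94862 in this run
    waitforlisten "$bperfpid" /var/tmp/bperf.sock  # block until RPC answers

The SPDK banner and DPDK EAL parameter dump that follow are that bdevperf process starting up.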
00:39:41.149 [2024-11-20 05:51:00.917375] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:39:41.149 [2024-11-20 05:51:00.917473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94862 ]
00:39:41.149 [2024-11-20 05:51:01.060077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:41.408 [2024-11-20 05:51:01.103421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:39:41.408 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:39:41.408 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:39:41.408 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:41.408 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:39:41.666 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:39:41.667 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:41.667 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:41.667 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:41.667 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:41.667 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:39:41.924 nvme0n1
00:39:41.924 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:39:41.924 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:41.924 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:39:41.925 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:41.925 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:39:41.925 05:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:42.190 Running I/O for 2 seconds...
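With bdevperf listening, the trace above arms the digest-error scenario in four RPC calls before any I/O moves. A condensed replay, where RPC stands for the rpc.py invocation shown in the trace and the comments state what each step does in this harness:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    # Keep per-status-code NVMe error counters and retry failed I/Os
    # indefinitely (-1), so injected errors are counted instead of fatal.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Leave CRC32C intact while the controller attaches...
    $RPC accel_error_inject_error -o crc32c -t disable
    # ...attach with data digest enabled (--ddgst), so every TCP data PDU
    # carries a CRC32C that the receiver verifies...
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then start corrupting CRC32C results (-i 256 as in the trace), so
    # verified digests mismatch on purpose.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the 2-second randwrite workload given on the command line.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

Every corrupted digest then surfaces below as a 'Data digest error' on the qpair plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly what get_transient_errcount adds up afterwards.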
00:39:42.190 [2024-11-20 05:51:01.958592 through 05:51:02.390100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef1430
00:39:42.190 (the line above repeats for dozens of WRITE completions on sqid:1, nsid:1, len:1, SGL DATA BLOCK OFFSET 0x0 len:0x1000, across varying cids, LBAs, and pdu offsets between 0x200016ede038 and 0x200016eff3c8; each WRITE is printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and each completion is reported by nvme_qpair.c: 474:spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0; the run continues and this capture cuts off mid-record at 05:51:02.390100)
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:39:42.514 [2024-11-20 05:51:02.400056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edf118 00:39:42.514 [2024-11-20 05:51:02.400770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.514 [2024-11-20 05:51:02.400801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:39:42.514 [2024-11-20 05:51:02.408576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7538 00:39:42.514 [2024-11-20 05:51:02.409211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.514 [2024-11-20 05:51:02.409241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:42.514 [2024-11-20 05:51:02.419185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee5220 00:39:42.514 [2024-11-20 05:51:02.420647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.514 [2024-11-20 05:51:02.420676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:39:42.781 [2024-11-20 05:51:02.425568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef0bc0 00:39:42.781 [2024-11-20 05:51:02.426270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.781 [2024-11-20 05:51:02.426299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:39:42.781 [2024-11-20 05:51:02.436275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eedd58 00:39:42.781 [2024-11-20 05:51:02.437497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.781 [2024-11-20 05:51:02.437526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:39:42.781 [2024-11-20 05:51:02.444613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4298 00:39:42.781 [2024-11-20 05:51:02.445580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.781 [2024-11-20 05:51:02.445611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:39:42.781 [2024-11-20 05:51:02.453289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eea680 00:39:42.781 [2024-11-20 05:51:02.454298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.781 [2024-11-20 
05:51:02.454327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:39:42.781 [2024-11-20 05:51:02.463881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef0350 00:39:42.781 [2024-11-20 05:51:02.465371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.465398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.470382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edece0 00:39:42.782 [2024-11-20 05:51:02.471152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.471178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.481390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef9f68 00:39:42.782 [2024-11-20 05:51:02.482665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.482695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.489910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef20d8 00:39:42.782 [2024-11-20 05:51:02.490905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.490936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.498868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eeb328 00:39:42.782 [2024-11-20 05:51:02.499921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.499952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.507328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee7818 00:39:42.782 [2024-11-20 05:51:02.508133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.508165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.515998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb048 00:39:42.782 [2024-11-20 05:51:02.516833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:42.782 [2024-11-20 05:51:02.516863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.526679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edf118 00:39:42.782 [2024-11-20 05:51:02.528035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.528066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.533189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4298 00:39:42.782 [2024-11-20 05:51:02.533811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.533839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.543852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eeaef0 00:39:42.782 [2024-11-20 05:51:02.544969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.544999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.552118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eea248 00:39:42.782 [2024-11-20 05:51:02.552982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.553013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.560821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7da8 00:39:42.782 [2024-11-20 05:51:02.561719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.561749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.571508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efa7d8 00:39:42.782 [2024-11-20 05:51:02.572899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.572929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.577871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efa7d8 00:39:42.782 [2024-11-20 05:51:02.578560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9050 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.578590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.588508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7da8 00:39:42.782 [2024-11-20 05:51:02.589679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.589710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.596783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef0bc0 00:39:42.782 [2024-11-20 05:51:02.597695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.597726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.605503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eeaef0 00:39:42.782 [2024-11-20 05:51:02.606469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.606498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.616269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4298 00:39:42.782 [2024-11-20 05:51:02.617734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.617763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.622724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edf118 00:39:42.782 [2024-11-20 05:51:02.623482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.623513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.633395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb048 00:39:42.782 [2024-11-20 05:51:02.634662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.634690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.641724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edece0 00:39:42.782 [2024-11-20 05:51:02.642709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22936 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.642739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.650441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eeb328 00:39:42.782 [2024-11-20 05:51:02.651477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.651508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.661144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef20d8 00:39:42.782 [2024-11-20 05:51:02.662683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.662711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.667512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef9f68 00:39:42.782 [2024-11-20 05:51:02.668295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.782 [2024-11-20 05:51:02.668323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:39:42.782 [2024-11-20 05:51:02.678327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee7818 00:39:42.782 [2024-11-20 05:51:02.679653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.783 [2024-11-20 05:51:02.679683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:42.783 [2024-11-20 05:51:02.684752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee0ea0 00:39:42.783 [2024-11-20 05:51:02.685337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.783 [2024-11-20 05:51:02.685366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:39:42.783 [2024-11-20 05:51:02.695495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eea680 00:39:42.783 [2024-11-20 05:51:02.696578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:42.783 [2024-11-20 05:51:02.696609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.703839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4298 00:39:43.042 [2024-11-20 05:51:02.704651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:24079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.704682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.712700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eedd58 00:39:43.042 [2024-11-20 05:51:02.713574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.713602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.723346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eea248 00:39:43.042 [2024-11-20 05:51:02.724735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.724765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.729744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee8088 00:39:43.042 [2024-11-20 05:51:02.730387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.730417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.740388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee6b70 00:39:43.042 [2024-11-20 05:51:02.741537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.741566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.748740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efa7d8 00:39:43.042 [2024-11-20 05:51:02.749625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.749657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.757467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee6300 00:39:43.042 [2024-11-20 05:51:02.758377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.758406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.768106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef0bc0 00:39:43.042 [2024-11-20 05:51:02.769529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:108 nsid:1 lba:23434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.769557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.774530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef6020 00:39:43.042 [2024-11-20 05:51:02.775234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.775262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.785219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eee5c8 00:39:43.042 [2024-11-20 05:51:02.786417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.786457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.793532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edf118 00:39:43.042 [2024-11-20 05:51:02.794453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.794494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.802282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee73e0 00:39:43.042 [2024-11-20 05:51:02.803275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.803304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.813046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edece0 00:39:43.042 [2024-11-20 05:51:02.814544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.814571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.819434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee12d8 00:39:43.042 [2024-11-20 05:51:02.820202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.820231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.830105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef20d8 00:39:43.042 [2024-11-20 05:51:02.831386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:10449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.831416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.838491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef9f68 00:39:43.042 [2024-11-20 05:51:02.839519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.839552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.847146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef8618 00:39:43.042 [2024-11-20 05:51:02.848206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.848237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.855396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee0ea0 00:39:43.042 [2024-11-20 05:51:02.856181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.856213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.864177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edfdc0 00:39:43.042 [2024-11-20 05:51:02.865033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.865062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.874898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4298 00:39:43.042 [2024-11-20 05:51:02.876236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.876267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.881284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efdeb0 00:39:43.042 [2024-11-20 05:51:02.881899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.881929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.891935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eebb98 00:39:43.042 [2024-11-20 05:51:02.893034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.893065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.900244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee8088 00:39:43.042 [2024-11-20 05:51:02.901085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.901116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.908949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efc560 00:39:43.042 [2024-11-20 05:51:02.909844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.909874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.919627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efa7d8 00:39:43.042 [2024-11-20 05:51:02.921009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.921038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.925954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ede470 00:39:43.042 [2024-11-20 05:51:02.926628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.926658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.936491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efd208 00:39:43.042 [2024-11-20 05:51:02.937661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.937691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:39:43.042 [2024-11-20 05:51:02.944894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef6020 00:39:43.042 [2024-11-20 05:51:02.945820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.945851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:39:43.042 28304.00 IOPS, 110.56 MiB/s [2024-11-20T05:51:02.961Z] [2024-11-20 05:51:02.953757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with 
pdu=0x200016eefae0 00:39:43.042 [2024-11-20 05:51:02.954679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.042 [2024-11-20 05:51:02.954709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:02.963493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee9e10 00:39:43.301 [2024-11-20 05:51:02.964689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.301 [2024-11-20 05:51:02.964721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:02.971866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee6738 00:39:43.301 [2024-11-20 05:51:02.972790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.301 [2024-11-20 05:51:02.972822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:02.980573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef6458 00:39:43.301 [2024-11-20 05:51:02.981565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.301 [2024-11-20 05:51:02.981595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:02.991103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee3d08 00:39:43.301 [2024-11-20 05:51:02.992607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.301 [2024-11-20 05:51:02.992635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:02.997516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee7c50 00:39:43.301 [2024-11-20 05:51:02.998259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.301 [2024-11-20 05:51:02.998289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:03.008152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb480 00:39:43.301 [2024-11-20 05:51:03.009404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.301 [2024-11-20 05:51:03.009435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:03.016070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2ced0) with pdu=0x200016efcdd0 00:39:43.301 [2024-11-20 05:51:03.017514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.301 [2024-11-20 05:51:03.017544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:39:43.301 [2024-11-20 05:51:03.025756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb048 00:39:43.302 [2024-11-20 05:51:03.026565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.026598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.034222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edf118 00:39:43.302 [2024-11-20 05:51:03.034879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.034910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.042771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eebb98 00:39:43.302 [2024-11-20 05:51:03.043352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.043382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.053123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eeb328 00:39:43.302 [2024-11-20 05:51:03.054293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.054323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.061660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef0ff8 00:39:43.302 [2024-11-20 05:51:03.062702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.062732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.070034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eecc78 00:39:43.302 [2024-11-20 05:51:03.070970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.071001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.078406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xe2ced0) with pdu=0x200016ee1710 00:39:43.302 [2024-11-20 05:51:03.079215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.079246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.086833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef3e60 00:39:43.302 [2024-11-20 05:51:03.087543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.087574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.095225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7538 00:39:43.302 [2024-11-20 05:51:03.095817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.095847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.106474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef57b0 00:39:43.302 [2024-11-20 05:51:03.107790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.107820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.114977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efda78 00:39:43.302 [2024-11-20 05:51:03.116186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.116216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.123590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee73e0 00:39:43.302 [2024-11-20 05:51:03.124648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.124678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.130790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ede470 00:39:43.302 [2024-11-20 05:51:03.131383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.131414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.141674] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eed920 00:39:43.302 [2024-11-20 05:51:03.142369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.142401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.150383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef3a28 00:39:43.302 [2024-11-20 05:51:03.150985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.151017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.159118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee0630 00:39:43.302 [2024-11-20 05:51:03.159594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.159624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.169595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee23b8 00:39:43.302 [2024-11-20 05:51:03.170666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.170697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.178320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4b08 00:39:43.302 [2024-11-20 05:51:03.179278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.179309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.187046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee4578 00:39:43.302 [2024-11-20 05:51:03.187904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.187935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.195805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee8088 00:39:43.302 [2024-11-20 05:51:03.196526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.196556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 
05:51:03.204485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eed920 00:39:43.302 [2024-11-20 05:51:03.205089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.205119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:43.302 [2024-11-20 05:51:03.215979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7970 00:39:43.302 [2024-11-20 05:51:03.217319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.302 [2024-11-20 05:51:03.217351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.223331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eed920 00:39:43.561 [2024-11-20 05:51:03.224208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.224238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.232038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efc560 00:39:43.561 [2024-11-20 05:51:03.232768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.232800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.240662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef1ca0 00:39:43.561 [2024-11-20 05:51:03.241280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.241309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.251357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7970 00:39:43.561 [2024-11-20 05:51:03.252111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.252144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.259990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efc560 00:39:43.561 [2024-11-20 05:51:03.260633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.260662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:39:43.561 [2024-11-20 05:51:03.268707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eeea00 00:39:43.561 [2024-11-20 05:51:03.269234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.269264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.279027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee99d8 00:39:43.561 [2024-11-20 05:51:03.280168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.280199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.287345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee3d08 00:39:43.561 [2024-11-20 05:51:03.288337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.288370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.296194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef96f8 00:39:43.561 [2024-11-20 05:51:03.297204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.561 [2024-11-20 05:51:03.297234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:39:43.561 [2024-11-20 05:51:03.306813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efbcf0 00:39:43.562 [2024-11-20 05:51:03.308328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.308359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.313360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7100 00:39:43.562 [2024-11-20 05:51:03.314168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.314197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.323974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef20d8 00:39:43.562 [2024-11-20 05:51:03.325262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.325292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0050 
p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.330300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee84c0 00:39:43.562 [2024-11-20 05:51:03.330884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.330914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.341116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efeb58 00:39:43.562 [2024-11-20 05:51:03.342201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.342233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.349432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efda78 00:39:43.562 [2024-11-20 05:51:03.350267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.350301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.358108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef31b8 00:39:43.562 [2024-11-20 05:51:03.358974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.359004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.368651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee1f80 00:39:43.562 [2024-11-20 05:51:03.370009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.370040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.374958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efd640 00:39:43.562 [2024-11-20 05:51:03.375612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.375642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.385737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee9168 00:39:43.562 [2024-11-20 05:51:03.386879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.386910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 
cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.394068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eea248 00:39:43.562 [2024-11-20 05:51:03.394955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.394986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.402749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef46d0 00:39:43.562 [2024-11-20 05:51:03.403682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.403714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.413323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee4140 00:39:43.562 [2024-11-20 05:51:03.414751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.414779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.419622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb8b8 00:39:43.562 [2024-11-20 05:51:03.420313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.420342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.430186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee1710 00:39:43.562 [2024-11-20 05:51:03.431397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.431427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.438473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee27f0 00:39:43.562 [2024-11-20 05:51:03.439459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.439489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.447039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ede8a8 00:39:43.562 [2024-11-20 05:51:03.448030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.448061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.457585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee3d08 00:39:43.562 [2024-11-20 05:51:03.459067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.459094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.464027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eebfd0 00:39:43.562 [2024-11-20 05:51:03.464786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.464814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:39:43.562 [2024-11-20 05:51:03.474691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb480 00:39:43.562 [2024-11-20 05:51:03.475963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.562 [2024-11-20 05:51:03.475995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:39:43.821 [2024-11-20 05:51:03.482992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7100 00:39:43.821 [2024-11-20 05:51:03.484009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.821 [2024-11-20 05:51:03.484041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:39:43.821 [2024-11-20 05:51:03.491712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eea680 00:39:43.821 [2024-11-20 05:51:03.492732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.821 [2024-11-20 05:51:03.492760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:43.821 [2024-11-20 05:51:03.500216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee84c0 00:39:43.821 [2024-11-20 05:51:03.500988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.821 [2024-11-20 05:51:03.501019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:43.821 [2024-11-20 05:51:03.509516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee0a68 00:39:43.822 [2024-11-20 05:51:03.510262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.510290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.518482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eeb760 00:39:43.822 [2024-11-20 05:51:03.519269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.519297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.529498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efe2e8 00:39:43.822 [2024-11-20 05:51:03.530794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.530938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.536163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef6458 00:39:43.822 [2024-11-20 05:51:03.536767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.536790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.547148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efc128 00:39:43.822 [2024-11-20 05:51:03.548254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.548285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.556096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efac10 00:39:43.822 [2024-11-20 05:51:03.556826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.556856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.564794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eebb98 00:39:43.822 [2024-11-20 05:51:03.565436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.565477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.573467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef5378 00:39:43.822 [2024-11-20 05:51:03.573927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.573949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.583865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef20d8 00:39:43.822 [2024-11-20 05:51:03.584957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.585091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.593181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef1ca0 00:39:43.822 [2024-11-20 05:51:03.594417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.594574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.602049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efef90 00:39:43.822 [2024-11-20 05:51:03.603031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.603176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.611060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eefae0 00:39:43.822 [2024-11-20 05:51:03.612200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.612359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.622223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016edf118 00:39:43.822 [2024-11-20 05:51:03.623881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.624029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.629092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4f40 00:39:43.822 [2024-11-20 05:51:03.630001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.630143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.640070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef0788 00:39:43.822 [2024-11-20 05:51:03.641487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.641630] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.648841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee73e0 00:39:43.822 [2024-11-20 05:51:03.649857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.650006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.657768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef4298 00:39:43.822 [2024-11-20 05:51:03.658958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.659102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.666423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef8a50 00:39:43.822 [2024-11-20 05:51:03.667236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.667389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.675401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ede038 00:39:43.822 [2024-11-20 05:51:03.676378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.676543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.686531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee5658 00:39:43.822 [2024-11-20 05:51:03.688005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.688037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.693057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efef90 00:39:43.822 [2024-11-20 05:51:03.693656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.693694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.703722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef2510 00:39:43.822 [2024-11-20 05:51:03.704827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.704857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.712002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eee5c8 00:39:43.822 [2024-11-20 05:51:03.712854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.712884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.720767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef7970 00:39:43.822 [2024-11-20 05:51:03.721669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.721809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:39:43.822 [2024-11-20 05:51:03.731704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eec840 00:39:43.822 [2024-11-20 05:51:03.733090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:43.822 [2024-11-20 05:51:03.733230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.738334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eec840 00:39:44.082 [2024-11-20 05:51:03.739145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.739172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.747529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee0ea0 00:39:44.082 [2024-11-20 05:51:03.748193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.748262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.756067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee2c28 00:39:44.082 [2024-11-20 05:51:03.756618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.756658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.766529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee73e0 00:39:44.082 [2024-11-20 05:51:03.767244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 
05:51:03.767318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.775084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef46d0 00:39:44.082 [2024-11-20 05:51:03.775675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.775698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.783514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efcdd0 00:39:44.082 [2024-11-20 05:51:03.783968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.783989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.793705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef0788 00:39:44.082 [2024-11-20 05:51:03.794908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.794938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.802383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee6fa8 00:39:44.082 [2024-11-20 05:51:03.803358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.803387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:39:44.082 [2024-11-20 05:51:03.810656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb480 00:39:44.082 [2024-11-20 05:51:03.811465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.082 [2024-11-20 05:51:03.811495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.819356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef6cc8 00:39:44.083 [2024-11-20 05:51:03.820227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.820255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.830109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efd208 00:39:44.083 [2024-11-20 05:51:03.831468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:39:44.083 [2024-11-20 05:51:03.831496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.836496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee84c0 00:39:44.083 [2024-11-20 05:51:03.837122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.837150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.847213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee3498 00:39:44.083 [2024-11-20 05:51:03.848373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.848403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.855694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efe2e8 00:39:44.083 [2024-11-20 05:51:03.856552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.856582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.864396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee7818 00:39:44.083 [2024-11-20 05:51:03.865312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.865342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.875102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eff3c8 00:39:44.083 [2024-11-20 05:51:03.876659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.876682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.881662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efda78 00:39:44.083 [2024-11-20 05:51:03.882469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.882499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.890877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef92c0 00:39:44.083 [2024-11-20 05:51:03.891581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15719 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.891609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.899454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee7818 00:39:44.083 [2024-11-20 05:51:03.900041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.900063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.910275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef3e60 00:39:44.083 [2024-11-20 05:51:03.911040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.911071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.918846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ee4140 00:39:44.083 [2024-11-20 05:51:03.919410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.919448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.927347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016efb048 00:39:44.083 [2024-11-20 05:51:03.927866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.927891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.937610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016eebfd0 00:39:44.083 [2024-11-20 05:51:03.938678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.938708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:44.083 [2024-11-20 05:51:03.946091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2ced0) with pdu=0x200016ef35f0 00:39:44.083 [2024-11-20 05:51:03.947684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:39:44.083 [2024-11-20 05:51:03.947714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:39:44.083 28282.50 IOPS, 110.48 MiB/s 00:39:44.083 Latency(us) 00:39:44.083 [2024-11-20T05:51:04.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:44.083 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:44.083 nvme0n1 : 2.00 
28292.38 110.52 0.00 0.00 4519.17 1888.06 12108.56 00:39:44.083 [2024-11-20T05:51:04.002Z] =================================================================================================================== 00:39:44.083 [2024-11-20T05:51:04.002Z] Total : 28292.38 110.52 0.00 0.00 4519.17 1888.06 12108.56 00:39:44.083 { 00:39:44.083 "results": [ 00:39:44.083 { 00:39:44.083 "job": "nvme0n1", 00:39:44.083 "core_mask": "0x2", 00:39:44.083 "workload": "randwrite", 00:39:44.083 "status": "finished", 00:39:44.083 "queue_depth": 128, 00:39:44.083 "io_size": 4096, 00:39:44.083 "runtime": 2.003826, 00:39:44.083 "iops": 28292.376683404647, 00:39:44.083 "mibps": 110.5170964195494, 00:39:44.083 "io_failed": 0, 00:39:44.083 "io_timeout": 0, 00:39:44.083 "avg_latency_us": 4519.171279363456, 00:39:44.083 "min_latency_us": 1888.0609523809524, 00:39:44.083 "max_latency_us": 12108.55619047619 00:39:44.083 } 00:39:44.083 ], 00:39:44.083 "core_count": 1 00:39:44.083 } 00:39:44.083 05:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:44.083 05:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:44.083 05:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:44.083 | .driver_specific 00:39:44.083 | .nvme_error 00:39:44.083 | .status_code 00:39:44.083 | .command_transient_transport_error' 00:39:44.083 05:51:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 )) 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94862 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94862 ']' 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94862 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94862 00:39:44.342 killing process with pid 94862 00:39:44.342 Received shutdown signal, test time was about 2.000000 seconds 00:39:44.342 00:39:44.342 Latency(us) 00:39:44.342 [2024-11-20T05:51:04.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:44.342 [2024-11-20T05:51:04.261Z] =================================================================================================================== 00:39:44.342 [2024-11-20T05:51:04.261Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:44.342 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94862' 00:39:44.343 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 
-- # kill 94862 00:39:44.343 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 94862 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94933 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94933 /var/tmp/bperf.sock 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 94933 ']' 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:44.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:44.601 05:51:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:44.601 [2024-11-20 05:51:04.445725] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:44.601 [2024-11-20 05:51:04.446001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:39:44.601 Zero copy mechanism will not be used. 
00:39:44.601 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94933 ] 00:39:44.859 [2024-11-20 05:51:04.589336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.859 [2024-11-20 05:51:04.631770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:45.426 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:45.426 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:39:45.426 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:45.426 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:39:45.684 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:39:45.684 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.684 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:45.684 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.684 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:45.684 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:39:45.942 nvme0n1 00:39:45.942 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:39:45.942 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:45.942 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:45.942 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:45.942 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:39:45.942 05:51:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:46.202 I/O size of 131072 is greater than zero copy threshold (65536). 00:39:46.202 Zero copy mechanism will not be used. 00:39:46.202 Running I/O for 2 seconds... 
00:39:46.202 [2024-11-20 05:51:05.916492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.202 [2024-11-20 05:51:05.916608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.916635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.920546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.920700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.920738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.924292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.924418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.924439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.927998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.928122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.928141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.931695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.931818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.931837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.935575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.935722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.935742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.939271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.939442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.939471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.942974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.943131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.943150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.946722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.946860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.946879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.950485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.950630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.950649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.954172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.954304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.954323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.957945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.958062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.958080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.961621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.961742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.961761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.965271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.965412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.965432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.969020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.969157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.969176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.972727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.972884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.972903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.976514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.976683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.976703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.980266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.980410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.980430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.983982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.984128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.984147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.987680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.987838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.987857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.991296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.991478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.991498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.995017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.995165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.995184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:05.998757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:05.998916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:05.998935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:06.002429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:06.002586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:06.002605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:06.006095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:06.006228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:06.006248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:06.009781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:06.009863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:06.009882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:06.013482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.203 [2024-11-20 05:51:06.013629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.203 [2024-11-20 05:51:06.013648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.203 [2024-11-20 05:51:06.017100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.017256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.017274] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.020893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.021042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.021061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.024654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.024812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.024830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.028399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.028567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.028586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.032131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.032265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.032284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.035849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.035993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.036012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.039608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.039737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.039756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.043277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.043456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.043487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.047000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.047138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.047157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.050763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.050905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.050924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.054397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.054533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.054552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.058052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.058191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.058210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.061719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.061852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.061871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.065379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.065541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.065561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.069055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.069192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 
05:51:06.069211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.072767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.072924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.072943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.076495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.076633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.076652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.080154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.080301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.080320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.083844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.084001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.084020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.087518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.087634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.087654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.091200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.091353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.091387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.094937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.095094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:46.204 [2024-11-20 05:51:06.095113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.098613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.098750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.098769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.102245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.102377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.102396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.105921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.106083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.106102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.109626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.109775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.109794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.204 [2024-11-20 05:51:06.113296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.204 [2024-11-20 05:51:06.113417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.204 [2024-11-20 05:51:06.113436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.205 [2024-11-20 05:51:06.117036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.205 [2024-11-20 05:51:06.117169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.205 [2024-11-20 05:51:06.117188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.120747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.120917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.120936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.124491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.124628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.124647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.128162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.128322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.128340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.131873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.132019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.132037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.135576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.135704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.135723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.139196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.139379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.139399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.142897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.143034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.143054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.146637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.146780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.146799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.150226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.150368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.150387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.153831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.153973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.153992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.157495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.157654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.157673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.161139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.161276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.161295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.164845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.165002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.165021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.168580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.168744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.168764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.172236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.172379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.172397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.175950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.176092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.465 [2024-11-20 05:51:06.176111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.465 [2024-11-20 05:51:06.179582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.465 [2024-11-20 05:51:06.179714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.179733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.183165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.183301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.183320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.186856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.187013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.187032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.190581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.190716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.190735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.194236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.194389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.194408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.197893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.198032] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.198052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.201572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.201717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.201736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.205217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.205378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.205398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.208979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.209130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.209149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.212744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.212915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.212934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.216464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.216620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.216639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.220115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.220257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.220276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.223808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.223940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.223959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.227529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.227674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.227694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.231193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.231370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.231388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.234872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.235036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.235055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.238507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.238629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.238648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.242169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.242283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.242302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.245871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.246010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.246030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.249536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 
05:51:06.249688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.249707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.253144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.253298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.253316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.256844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.257011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.257029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.260524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.260669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.260687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.264222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.264389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.264408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.267945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.268077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.268095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.271638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.466 [2024-11-20 05:51:06.271800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.271820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.275233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 
00:39:46.466 [2024-11-20 05:51:06.275410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.466 [2024-11-20 05:51:06.275429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.466 [2024-11-20 05:51:06.278948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.279085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.279104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.282642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.282813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.282832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.286324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.286507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.286527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.289924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.290086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.290106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.293607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.293758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.293777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.297237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.297392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.297410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.300843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) 
with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.301005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.301024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.304544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.304708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.304727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.308213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.308357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.308377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.311837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.311959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.311978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.315515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.315641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.315660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.319124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.319283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.319302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.322792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.322949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.322968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.326437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.326595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.326614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.330112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.330261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.330281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.333788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.333957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.333975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.337423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.337560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.337580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.341077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.341210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.341229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.344795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.344930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.344950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.348514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.348660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.348679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.352218] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.352349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.352368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.355878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.355991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.356011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.359494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.359642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.359661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.363078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.363237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.363256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.366748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.366909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.366928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.370447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.370611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.370630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.374161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.467 [2024-11-20 05:51:06.374304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.467 [2024-11-20 05:51:06.374323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.467 [2024-11-20 05:51:06.377816] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.468 [2024-11-20 05:51:06.377966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.468 [2024-11-20 05:51:06.377984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.381524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.381658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.381677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.385194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.385340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.385359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.388964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.389113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.389131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.392675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.392800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.392818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.396325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.396488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.396507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.399968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.400099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.400118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.728 
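Every triplet in this stretch follows the same pattern: tcp.c reports a CRC32C data-digest mismatch on a received data PDU, then the host qpair code prints the WRITE command it affected and the completion it produced, a generic TRANSIENT TRANSPORT ERROR (00/22). The digest being checked is the NVMe/TCP DDGST, a CRC-32C (Castagnoli) over the PDU payload. A minimal illustrative sketch of that check in Python follows; it is not SPDK's implementation (the real check is in C, in the tcp.c function named in the log), and data_digest_ok is a hypothetical helper name:

    # Bitwise CRC-32C (Castagnoli), the polynomial NVMe/TCP uses for its
    # HDGST/DDGST fields. 0x82F63B78 is the reflected form of 0x1EDC6F41.
    CRC32C_POLY_REFLECTED = 0x82F63B78

    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (CRC32C_POLY_REFLECTED if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    # Standard CRC-32C check value for the ASCII string "123456789".
    assert crc32c(b"123456789") == 0xE3069283

    def data_digest_ok(payload: bytes, ddgst: int) -> bool:
        # A receiver recomputes the digest over the data PDU payload and
        # compares it against the transmitted DDGST; a mismatch is what the
        # "Data digest error" lines above are reporting.
        return crc32c(payload) == ddgst

Because the corruption is in flight rather than on media, each failure surfaces to the host as a transient transport error completion rather than an I/O error on the namespace, which is what the completion lines above show.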
[2024-11-20 05:51:06.403646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.403786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.403806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.407246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.407392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.407410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.410838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.410975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.410994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.414516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.414643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.414661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.418184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.418344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.418363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.421870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.421984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.422002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.425558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.425705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.425723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:39:46.728 [2024-11-20 05:51:06.429151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.429291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.429310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.432861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.433005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.433023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.436519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.436665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.436683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.440109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.440256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.440275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.443769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.443926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.728 [2024-11-20 05:51:06.443943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.728 [2024-11-20 05:51:06.447394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.728 [2024-11-20 05:51:06.447558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.447578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.451035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.451173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.454782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.454940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.454960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.458421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.458573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.458593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.462211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.462336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.462356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.465893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.466035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.466054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.469573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.469756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.469774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.473255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.473399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.473419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.477036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.477194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.477214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.480680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.480826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.480845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.484346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.484532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.484552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.488051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.488195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.488214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.491786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.491913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.491933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.495434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.495597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.495617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.499032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.499168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.499187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.502753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.502913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.502932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.506413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.506563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.506582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.510120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.510248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.510267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.513830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.513992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.514011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.517533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.517679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.517698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.521174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.521308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.521328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.524938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.525079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.525098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.729 [2024-11-20 05:51:06.528668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.729 [2024-11-20 05:51:06.528809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.729 [2024-11-20 05:51:06.528828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.532345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.532525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.532545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.536033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.536168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.536187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.539704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.539828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.539847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.543287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.543454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.543474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.546952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.547115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.547134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.550594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.550754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.550772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.554248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.554386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.554405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.557953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.558095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.558115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.561652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.561812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.561830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.565392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.565549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.565568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.569088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.569240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.569258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.572822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.572964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.572984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.576467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.576610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.576629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.580213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.580353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 
05:51:06.580372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.583938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.584096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.584116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.587676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.587828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.587848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.591527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.591667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.591687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.595252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.595401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.595421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.599077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.599235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.599254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.602875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.603012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.603032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.606615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.606759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:39:46.730 [2024-11-20 05:51:06.606784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.610321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.610462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.610486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.614133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.614261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.614281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.617849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.617979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.618004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.621534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.621682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.730 [2024-11-20 05:51:06.621706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.730 [2024-11-20 05:51:06.625261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.730 [2024-11-20 05:51:06.625406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.731 [2024-11-20 05:51:06.625430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.731 [2024-11-20 05:51:06.629000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.731 [2024-11-20 05:51:06.629146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.731 [2024-11-20 05:51:06.629166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.731 [2024-11-20 05:51:06.632729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.731 [2024-11-20 05:51:06.632901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:39:46.731 [2024-11-20 05:51:06.632921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.731 [2024-11-20 05:51:06.636422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.731 [2024-11-20 05:51:06.636611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.731 [2024-11-20 05:51:06.636631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.731 [2024-11-20 05:51:06.640182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.731 [2024-11-20 05:51:06.640318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.731 [2024-11-20 05:51:06.640337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.643873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.644024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.644045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.647645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.647771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.647790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.651311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.651494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.651513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.655061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.655198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.655216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.658831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.658989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.659009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.662596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.662745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.662765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.666251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.666383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.666403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.669981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.670141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.670161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.673662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.673802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.673821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.677352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.677508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.677528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.681082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.681216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.681235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.684831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.684972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.684992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.688549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.688681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.688700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.692258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.692392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.692412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.695961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.696105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.696124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.699702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.699850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.699870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.703391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.703541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.703560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.707102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.707214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.707233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.710801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.710957] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.710975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.714517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.991 [2024-11-20 05:51:06.714701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.991 [2024-11-20 05:51:06.714720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.991 [2024-11-20 05:51:06.718229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.718389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.718408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.721892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.722050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.722069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.725597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.725739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.725758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.729282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.729428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.729458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.733030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.733146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.733164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.736770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.736884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.736903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.740474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.740620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.740639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.744161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.744309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.744334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.747875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.748019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.748037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.751640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.751776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.751795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.755381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.755541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.755561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.759074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.759206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.759224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.762767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 
05:51:06.762913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.762932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.766539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.766704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.766723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.770252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.770386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.770405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.773936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.774050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.774069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.777605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.777755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.777775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.781282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.781442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.781471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.785043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.785202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.785221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.788750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 
00:39:46.992 [2024-11-20 05:51:06.788932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.788950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.792500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.792667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.792687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.796195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.796361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.796379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.799888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.800001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.992 [2024-11-20 05:51:06.800020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.992 [2024-11-20 05:51:06.803536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.992 [2024-11-20 05:51:06.803701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.803719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.807192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.807359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.807377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.810930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.811066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.811084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.814610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with 
pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.814766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.814785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.818237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.818360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.818380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.821913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.822042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.822062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.825595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.825729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.825747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.829226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.829386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.829405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.832962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.833106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.833126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.836738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:46.993 [2024-11-20 05:51:06.836876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:46.993 [2024-11-20 05:51:06.836895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:46.993 [2024-11-20 05:51:06.840395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2d210) with pdu=0x200016eff3c8
00:39:46.993 [2024-11-20 05:51:06.840557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:46.993 [2024-11-20 05:51:06.840575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:46.993 [2024-11-20 05:51:06.844051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8
00:39:46.993 [2024-11-20 05:51:06.844176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:46.993 [2024-11-20 05:51:06.844196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:46.993 [... the same three-line pattern repeats every ~3.5 ms on tqpair=(0xe2d210) with pdu=0x200016eff3c8: a data digest error from tcp.c:2233:data_crc32_calc_done, the failing WRITE (len:32, varying lba, cid alternating 0/1), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), dnr:0 ...]
00:39:47.256 8384.00 IOPS, 1048.00 MiB/s [2024-11-20T05:51:07.175Z]
00:39:47.256 [... digest-error pattern continues unchanged through 05:51:07.375 ...]
00:39:47.522 [2024-11-20 05:51:07.371716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8
00:39:47.522 [2024-11-20 05:51:07.371858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:47.522 [2024-11-20 05:51:07.371877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:39:47.522 [2024-11-20 05:51:07.375372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8
00:39:47.522 [2024-11-20 05:51:07.375570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:39:47.522 [2024-11-20 05:51:07.375590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:39:47.522 [2024-11-20
05:51:07.378968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.522 [2024-11-20 05:51:07.379112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.522 [2024-11-20 05:51:07.379131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.522 [2024-11-20 05:51:07.382620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.522 [2024-11-20 05:51:07.382766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.522 [2024-11-20 05:51:07.382785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.522 [2024-11-20 05:51:07.386154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.522 [2024-11-20 05:51:07.386310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.522 [2024-11-20 05:51:07.386330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.522 [2024-11-20 05:51:07.389835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.522 [2024-11-20 05:51:07.389975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.522 [2024-11-20 05:51:07.389994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.522 [2024-11-20 05:51:07.393483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.522 [2024-11-20 05:51:07.393621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.522 [2024-11-20 05:51:07.393640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.522 [2024-11-20 05:51:07.397155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.522 [2024-11-20 05:51:07.397314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.522 [2024-11-20 05:51:07.397333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.522 [2024-11-20 05:51:07.400876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.522 [2024-11-20 05:51:07.401036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.522 [2024-11-20 05:51:07.401054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:39:47.522 [2024-11-20 05:51:07.404570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.404751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.404770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.408232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.408366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.408385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.411881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.412023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.412043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.415521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.415657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.415676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.419060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.419215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.419234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.422729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.422865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.422883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.426380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.426530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.426549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.430028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.430166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.430184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.523 [2024-11-20 05:51:07.433738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.523 [2024-11-20 05:51:07.433896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.523 [2024-11-20 05:51:07.433915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.437376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.437528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.437547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.441104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.441255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.441273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.444835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.444972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.444991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.448524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.448708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.448727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.452167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.452338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.452357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.455859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.455981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.456000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.459502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.459665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.459684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.463116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.463276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.463294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.466788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.466916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.466935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.470427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.470600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.470618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.474031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.474199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.474218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.477713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.477872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.477891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.481386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.481512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.481531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.485107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.485238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.485257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.488843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.488982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.489001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.492498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.492645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.492664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.496167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.496317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.496336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.499846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.500018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.500037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.503563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.503693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.503712] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.507112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.507277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.507296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.510816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.510978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.510996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.514517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.514674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.514693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.518137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.784 [2024-11-20 05:51:07.518291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.784 [2024-11-20 05:51:07.518309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.784 [2024-11-20 05:51:07.521813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.521967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.521986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.525483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.525658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.525676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.529193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.529337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.529362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.532960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.533106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.533131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.536666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.536788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.536807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.540313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.540466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.540485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.543994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.544135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.544154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.547611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.547752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.547771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.551267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.551418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 05:51:07.551437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:47.785 [2024-11-20 05:51:07.554873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:47.785 [2024-11-20 05:51:07.555019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:47.785 [2024-11-20 
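For context: the digest these errors refer to is the NVMe/TCP data digest (DDGST), a CRC32C computed over the DATA field of each PDU; the receiver recomputes it and compares it with the digest carried in the PDU, and data_crc32_calc_done is the callback invoked once that calculation finishes. The sketch below is illustrative only (a self-contained C program with a hand-rolled bitwise CRC32C helper, not SPDK's tcp.c implementation); it shows the kind of comparison failing here: a corrupted payload produces a mismatched digest, and the command is completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), the WRITE/completion pattern that fills this log.

/* Illustrative sketch (assumed names; not SPDK code): receiver-side
 * DDGST check for an NVMe/TCP PDU payload. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * seed 0xFFFFFFFF, final XOR 0xFFFFFFFF: the DDGST algorithm. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512] = { 0 };      /* stand-in for a PDU DATA field */
    uint32_t ddgst_sent = crc32c(payload, sizeof(payload));

    payload[0] ^= 0x01;                /* corrupt one bit "in flight" */
    uint32_t ddgst_calc = crc32c(payload, sizeof(payload));

    /* Conceptually the check made when the CRC calculation completes:
     * on mismatch the target fails the command instead of executing it. */
    if (ddgst_calc != ddgst_sent)
        printf("Data digest error: sent 0x%08x, calculated 0x%08x\n",
               ddgst_sent, ddgst_calc);
    return 0;
}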
[log resumes; the digest-error/WRITE/completion triplet continues with only lba, cid, and sqhd varying through 05:51:07.785]
00:39:48.048 [2024-11-20 05:51:07.789205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with
pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.789362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.789381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.792965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.793103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.793123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.796711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.796863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.796882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.800434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.800592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.800611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.804101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.804271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.804291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.807846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.807990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.808008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.811591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.811725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.811744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.815276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.815458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.815477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.819024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.819142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.819161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.822695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.822824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.822844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.826376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.826620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.826640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.830258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.830471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.830491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.834106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.834309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.834328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.837936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.838117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.838136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.841804] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.841969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.841988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.845511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.845661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.845680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.849226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.849432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.849451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.853116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.853341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.853361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.857022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.857230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.857250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.860927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.861119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.861138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.048 [2024-11-20 05:51:07.864803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.048 [2024-11-20 05:51:07.864937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.048 [2024-11-20 05:51:07.864957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.868640] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.868802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.868821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.872376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.873925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.873944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.877690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.877818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.877837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.881430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.881625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.881644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.885298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.885539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.885559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.889209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.889432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.889451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.893106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.893313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.893333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.049 
[2024-11-20 05:51:07.896961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.897166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.897185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.900816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.900958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.900977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.904577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.904707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.904726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.908232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.908458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.908477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:48.049 [2024-11-20 05:51:07.912125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.912335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.912355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:39:48.049 8369.50 IOPS, 1046.19 MiB/s [2024-11-20T05:51:07.968Z] [2024-11-20 05:51:07.917112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe2d210) with pdu=0x200016eff3c8 00:39:48.049 [2024-11-20 05:51:07.917239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:48.049 [2024-11-20 05:51:07.917258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:48.049 00:39:48.049 Latency(us) 00:39:48.049 [2024-11-20T05:51:07.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:48.049 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:39:48.049 nvme0n1 : 2.00 8364.64 1045.58 0.00 0.00 1909.37 1373.14 5305.30 00:39:48.049 [2024-11-20T05:51:07.968Z] 
=================================================================================================================== 00:39:48.049 [2024-11-20T05:51:07.968Z] Total : 8364.64 1045.58 0.00 0.00 1909.37 1373.14 5305.30 00:39:48.049 { 00:39:48.049 "results": [ 00:39:48.049 { 00:39:48.049 "job": "nvme0n1", 00:39:48.049 "core_mask": "0x2", 00:39:48.049 "workload": "randwrite", 00:39:48.049 "status": "finished", 00:39:48.049 "queue_depth": 16, 00:39:48.049 "io_size": 131072, 00:39:48.049 "runtime": 2.003075, 00:39:48.049 "iops": 8364.639366973279, 00:39:48.049 "mibps": 1045.5799208716599, 00:39:48.049 "io_failed": 0, 00:39:48.049 "io_timeout": 0, 00:39:48.049 "avg_latency_us": 1909.3677656989385, 00:39:48.049 "min_latency_us": 1373.135238095238, 00:39:48.049 "max_latency_us": 5305.295238095238 00:39:48.049 } 00:39:48.049 ], 00:39:48.049 "core_count": 1 00:39:48.049 } 00:39:48.049 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:39:48.049 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:39:48.049 | .driver_specific 00:39:48.049 | .nvme_error 00:39:48.049 | .status_code 00:39:48.049 | .command_transient_transport_error' 00:39:48.049 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:39:48.049 05:51:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 541 > 0 )) 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94933 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94933 ']' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94933 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94933 00:39:48.616 killing process with pid 94933 00:39:48.616 Received shutdown signal, test time was about 2.000000 seconds 00:39:48.616 00:39:48.616 Latency(us) 00:39:48.616 [2024-11-20T05:51:08.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:48.616 [2024-11-20T05:51:08.535Z] =================================================================================================================== 00:39:48.616 [2024-11-20T05:51:08.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94933' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 94933 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
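The assertion that follows is produced by the two digest.sh helpers traced just above. A minimal sketch of how they fit together, reconstructed from the xtrace lines (the real helper bodies in host/digest.sh may differ in detail, and bperf_rpc is assumed to be a thin wrapper that points rpc.py at the /var/tmp/bperf.sock socket shown in the @18 trace):

    # sketch reconstructed from the trace, not copied from host/digest.sh
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # digest.sh@71 then asserts that at least one transient transport error was counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))

The (( 541 > 0 )) line below is that assertion after expansion: the CRC32C data digest errors injected on the wire surfaced as 541 COMMAND TRANSIENT TRANSPORT ERROR completions on nvme0n1, so the check passes.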
common/autotest_common.sh@976 -- # wait 94933 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94652 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 94652 ']' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 94652 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94652 00:39:48.616 killing process with pid 94652 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94652' 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 94652 00:39:48.616 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 94652 00:39:48.875 00:39:48.875 real 0m16.639s 00:39:48.875 user 0m30.990s 00:39:48.875 sys 0m4.935s 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:48.875 ************************************ 00:39:48.875 END TEST nvmf_digest_error 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:39:48.875 ************************************ 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.875 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.875 rmmod nvme_tcp 00:39:48.875 rmmod nvme_fabrics 00:39:49.134 rmmod nvme_keyring 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:49.134 Process with pid 94652 is not found 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94652 ']' 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94652 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 94652 ']' 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # 
kill -0 94652 00:39:49.134 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (94652) - No such process 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 94652 is not found' 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:39:49.134 05:51:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:49.134 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:49.134 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:39:49.134 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.134 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.134 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:39:49.392 ************************************ 00:39:49.392 END TEST nvmf_digest 00:39:49.392 ************************************ 00:39:49.392 00:39:49.392 real 0m33.246s 00:39:49.392 user 1m0.375s 00:39:49.392 sys 0m10.264s 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:49.392 
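Both bperf (pid 94933) and the nvmf target (pid 94652) were torn down through the same autotest_common.sh helper traced above. Condensed into a sketch (the branch structure is paraphrased from the xtrace output, not copied from the source file):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            # the branch taken on the second killprocess 94652 above,
            # after the digest test itself had already killed the target
            echo "Process with pid $pid is not found"
            return 0
        fi
        local process_name=
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1
        fi
        if [[ $process_name != sudo ]]; then   # a sudo wrapper would need different handling
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true                # reap the process and collect its exit status
        fi
    }

The kill -0 probe is the key idiom here: it sends no signal and only reports whether the pid still exists, which is why the already-dead target produces the "No such process" line instead of a failure.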
05:51:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:49.392 ************************************ 00:39:49.392 START TEST nvmf_mdns_discovery 00:39:49.392 ************************************ 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:39:49.392 * Looking for test storage... 00:39:49.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:49.392 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:49.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.651 --rc genhtml_branch_coverage=1 00:39:49.651 --rc genhtml_function_coverage=1 00:39:49.651 --rc genhtml_legend=1 00:39:49.651 --rc geninfo_all_blocks=1 00:39:49.651 --rc geninfo_unexecuted_blocks=1 00:39:49.651 00:39:49.651 ' 00:39:49.651 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:49.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.651 --rc genhtml_branch_coverage=1 00:39:49.651 --rc genhtml_function_coverage=1 00:39:49.651 --rc genhtml_legend=1 00:39:49.651 --rc geninfo_all_blocks=1 00:39:49.651 --rc geninfo_unexecuted_blocks=1 00:39:49.651 00:39:49.651 ' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:49.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.652 --rc genhtml_branch_coverage=1 00:39:49.652 --rc genhtml_function_coverage=1 00:39:49.652 --rc genhtml_legend=1 00:39:49.652 --rc geninfo_all_blocks=1 00:39:49.652 --rc geninfo_unexecuted_blocks=1 00:39:49.652 00:39:49.652 ' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:49.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.652 --rc genhtml_branch_coverage=1 00:39:49.652 --rc genhtml_function_coverage=1 00:39:49.652 --rc genhtml_legend=1 00:39:49.652 --rc geninfo_all_blocks=1 00:39:49.652 --rc geninfo_unexecuted_blocks=1 00:39:49.652 00:39:49.652 ' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:49.652 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:39:49.652 Cannot find device "nvmf_init_br" 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:39:49.652 Cannot find device "nvmf_init_br2" 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:39:49.652 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:39:49.652 Cannot find device "nvmf_tgt_br" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:39:49.653 Cannot find device "nvmf_tgt_br2" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:39:49.653 Cannot find device "nvmf_init_br" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:39:49.653 Cannot find device "nvmf_init_br2" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:39:49.653 Cannot find device "nvmf_tgt_br" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:39:49.653 Cannot find device "nvmf_tgt_br2" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:39:49.653 Cannot find device "nvmf_br" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:39:49.653 Cannot find device "nvmf_init_if" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:39:49.653 Cannot find device "nvmf_init_if2" 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:39:49.653 05:51:09 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:49.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:49.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:39:49.653 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:39:49.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:49.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:39:49.911 00:39:49.911 --- 10.0.0.3 ping statistics --- 00:39:49.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.911 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:39:49.911 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:39:49.911 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:39:49.911 00:39:49.911 --- 10.0.0.4 ping statistics --- 00:39:49.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.911 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:49.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:49.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:39:49.911 00:39:49.911 --- 10.0.0.1 ping statistics --- 00:39:49.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.911 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:39:49.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:49.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:39:49.911 00:39:49.911 --- 10.0.0.2 ping statistics --- 00:39:49.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:49.911 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:39:49.911 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=95281 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 95281 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # '[' -z 95281 ']' 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:49.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:49.912 05:51:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.169 [2024-11-20 05:51:09.880568] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
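Two details are worth noting from the firewall and launch steps above. First, the ipts helper (common.sh@217-219, expanding at @790) is evidently a thin iptables wrapper that tags every inserted rule with an SPDK_NVMF comment so teardown can later match and delete exactly these rules; a minimal sketch consistent with the traced expansion, not the script itself:

# Hypothetical reconstruction: the trace shows each rule gaining
# "-m comment --comment 'SPDK_NVMF:<original args>'".
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

Second, prepending NVMF_TARGET_NS_CMD to NVMF_APP (common.sh@227) is what makes the subsequent launch run inside the namespace, so the pid-95281 target is started as:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc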
00:39:50.169 [2024-11-20 05:51:09.880665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:50.169 [2024-11-20 05:51:10.037330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:50.426 [2024-11-20 05:51:10.090029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:50.426 [2024-11-20 05:51:10.090089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:50.426 [2024-11-20 05:51:10.090105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:50.427 [2024-11-20 05:51:10.090118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:50.427 [2024-11-20 05:51:10.090129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:50.427 [2024-11-20 05:51:10.090516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@866 -- # return 0 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:50.992 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 [2024-11-20 05:51:10.948432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 [2024-11-20 05:51:10.960581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 null0 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 null1 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 null2 00:39:51.250 05:51:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 null3 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95332 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:39:51.250 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95332 /tmp/host.sock 00:39:51.251 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # '[' -z 95332 ']' 00:39:51.251 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # local 
rpc_addr=/tmp/host.sock 00:39:51.251 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:51.251 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:39:51.251 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:39:51.251 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:51.251 05:51:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:51.251 [2024-11-20 05:51:11.088618] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:51.251 [2024-11-20 05:51:11.088922] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95332 ] 00:39:51.508 [2024-11-20 05:51:11.247298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.508 [2024-11-20 05:51:11.307099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@866 -- # return 0 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95362 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:39:52.443 05:51:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:39:52.443 Process 1063 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:39:52.443 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:39:52.443 Successfully dropped root privileges. 00:39:52.443 avahi-daemon 0.8 starting up. 00:39:52.443 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:39:52.443 Successfully called chroot(). 00:39:52.443 Successfully dropped remaining capabilities. 00:39:53.378 No service file found in /etc/avahi/services. 00:39:53.378 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:39:53.378 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:39:53.378 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:39:53.378 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:39:53.378 Network interface enumeration completed. 00:39:53.378 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:39:53.378 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:39:53.378 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:39:53.378 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:39:53.378 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 184402612. 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:39:53.378 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:39:53.379 05:51:13 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:39:53.379 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 
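The get_subsystem_names and get_bdev_list checks, polled repeatedly from here on, are small RPC-plus-jq pipelines against the host app's /tmp/host.sock; reconstructed directly from the traced commands at mdns_discovery.sh@69 and @65:

# Both return a sorted, space-separated list ('' while nothing is attached yet).
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

At this point both still expand to the empty string, since the mDNS browser has not yet resolved a discovery service and attached any controllers.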
00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 [2024-11-20 05:51:13.451383] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 [2024-11-20 05:51:13.473072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:53.638 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.639 05:51:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:39:54.573 [2024-11-20 05:51:14.351404] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:39:55.140 [2024-11-20 05:51:14.751415] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:39:55.140 [2024-11-20 05:51:14.751443] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:39:55.140 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:39:55.140 cookie is 0 00:39:55.140 is_local: 1 00:39:55.140 our_own: 0 00:39:55.140 wide_area: 0 00:39:55.140 multicast: 1 00:39:55.140 cached: 1 00:39:55.140 [2024-11-20 05:51:14.851409] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:39:55.140 [2024-11-20 05:51:14.851428] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:39:55.140 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:39:55.140 cookie is 0 00:39:55.140 is_local: 1 00:39:55.140 our_own: 0 00:39:55.140 wide_area: 0 00:39:55.140 multicast: 1 00:39:55.140 cached: 1 00:39:56.076 [2024-11-20 05:51:15.752330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:56.076 [2024-11-20 05:51:15.752375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1873710 with addr=10.0.0.4, port=8009 00:39:56.076 [2024-11-20 05:51:15.752425] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:56.076 [2024-11-20 05:51:15.752437] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:56.076 [2024-11-20 05:51:15.752446] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:39:56.076 [2024-11-20 05:51:15.862999] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:39:56.076 [2024-11-20 05:51:15.863026] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:39:56.076 [2024-11-20 05:51:15.863055] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:39:56.076 [2024-11-20 05:51:15.949095] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:39:56.335 [2024-11-20 05:51:16.003435] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:39:56.335 [2024-11-20 05:51:16.004106] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18a8e90:1 started. 00:39:56.335 [2024-11-20 05:51:16.005818] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:39:56.335 [2024-11-20 05:51:16.005840] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:39:56.335 [2024-11-20 05:51:16.011690] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18a8e90 was disconnected and freed. delete nvme_qpair. 00:39:56.902 [2024-11-20 05:51:16.752261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:56.902 [2024-11-20 05:51:16.752305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1873350 with addr=10.0.0.4, port=8009 00:39:56.902 [2024-11-20 05:51:16.752324] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:56.902 [2024-11-20 05:51:16.752348] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:56.902 [2024-11-20 05:51:16.752357] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:39:57.839 [2024-11-20 05:51:17.752266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:39:57.839 [2024-11-20 05:51:17.752308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1891a90 with addr=10.0.0.4, port=8009 00:39:57.839 [2024-11-20 05:51:17.752326] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:39:57.839 [2024-11-20 05:51:17.752334] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:57.839 [2024-11-20 05:51:17.752343] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:39:58.776 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:39:58.776 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:39:58.776 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:58.776 [2024-11-20 05:51:18.562765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:39:58.776 [2024-11-20 05:51:18.564414] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:39:58.776 [2024-11-20 05:51:18.564440] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:58.776 [2024-11-20 05:51:18.570704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:39:58.776 [2024-11-20 05:51:18.571399] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:39:58.776 05:51:18 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:58.776 05:51:18 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:39:59.036 [2024-11-20 05:51:18.703900] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:39:59.036 [2024-11-20 05:51:18.703931] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:39:59.036 [2024-11-20 05:51:18.763292] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:39:59.036 [2024-11-20 05:51:18.763312] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:39:59.036 [2024-11-20 05:51:18.763323] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:39:59.036 [2024-11-20 05:51:18.790411] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:39:59.036 [2024-11-20 05:51:18.849381] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:39:59.036 [2024-11-20 05:51:18.903683] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:39:59.036 [2024-11-20 05:51:18.904169] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x18a5940:1 started. 00:39:59.036 [2024-11-20 05:51:18.905467] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:39:59.036 [2024-11-20 05:51:18.905489] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:39:59.036 [2024-11-20 05:51:18.912134] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x18a5940 was disconnected and freed. delete nvme_qpair. 
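The check_mdns_request_exists helper about to run (and already traced once at @152 with 'not found') scans the parsable avahi-browse dump line by line for a process name, address, and port. A plausible reconstruction from the traced internals at mdns_discovery.sh@85-@108; the real helper may differ in detail:

# Returns success when the avahi-browse output matches the expected check_type.
check_mdns_request_exists() {
    local process=$1 ip=$2 port=$3 check_type=$4 output line
    output=$(avahi-browse -t -r _nvme-disc._tcp -p)
    readarray -t lines <<< "$output"
    for line in "${lines[@]}"; do
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == "found" ]] && return 0   # expected and present
            return 1                                   # present but unexpected
        fi
    done
    [[ $check_type == "found" ]] && return 1           # expected but absent
    return 0                                           # absent, as expected
}

With spdk1 now registered on 10.0.0.4:8009, the 'found' variant below succeeds, where the earlier 'not found' check passed for the opposite reason.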
00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:39:59.973 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:39:59.973 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:39:59.973 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:39:59.973 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:39:59.973 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:39:59.973 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:39:59.973 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:39:59.973 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.973 [2024-11-20 05:51:19.751483] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:39:59.974 [2024-11-20 05:51:19.751505] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:39:59.974 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:39:59.974 cookie is 0 00:39:59.974 is_local: 1 00:39:59.974 our_own: 0 00:39:59.974 wide_area: 0 00:39:59.974 multicast: 1 00:39:59.974 cached: 1 00:39:59.974 [2024-11-20 05:51:19.751516] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # 
get_subsystem_paths mdns1_nvme0 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:39:59.974 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:00.233 [2024-11-20 05:51:19.963566] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1892f40:1 started. 00:40:00.233 [2024-11-20 05:51:19.966783] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x19d6d90:1 started. 
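The get_notification_count bookkeeping (first call above with -i 0, repeated below with -i 2) is incremental: each call asks only for events newer than the last seen id and then advances notify_id by however many arrived, so the first call sees the two namespace-attach events from null0/null2 and the later one is expected to see exactly the two new events from null1 and null3. A sketch of the pattern from the traced commands at mdns_discovery.sh@116-@117:

# notify_id starts at 0 (set at @114) and moves forward past consumed events.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}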
00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.233 05:51:19 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:40:00.233 [2024-11-20 05:51:19.972503] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1892f40 was disconnected and freed. delete nvme_qpair. 00:40:00.233 [2024-11-20 05:51:19.972824] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x19d6d90 was disconnected and freed. delete nvme_qpair. 00:40:00.233 [2024-11-20 05:51:20.051491] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:40:00.233 [2024-11-20 05:51:20.051511] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:40:00.233 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:00.233 cookie is 0 00:40:00.233 is_local: 1 00:40:00.233 our_own: 0 00:40:00.233 wide_area: 0 00:40:00.233 multicast: 1 00:40:00.233 cached: 1 00:40:00.233 [2024-11-20 05:51:20.051521] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:40:01.170 05:51:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:40:01.170 05:51:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:01.170 05:51:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.170 05:51:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:01.170 05:51:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:40:01.170 05:51:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:40:01.170 05:51:20 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:01.170 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:01.430 [2024-11-20 05:51:21.108058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:40:01.430 [2024-11-20 05:51:21.108738] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:40:01.430 [2024-11-20 05:51:21.108763] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:01.430 [2024-11-20 05:51:21.108791] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:40:01.430 [2024-11-20 05:51:21.108801] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:01.430 [2024-11-20 05:51:21.116014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:40:01.430 [2024-11-20 05:51:21.116744] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:40:01.430 [2024-11-20 05:51:21.116783] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:01.430 05:51:21 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:40:01.430 [2024-11-20 05:51:21.247842] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:40:01.430 [2024-11-20 05:51:21.248155] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:40:01.430 [2024-11-20 05:51:21.307188] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:40:01.430 [2024-11-20 05:51:21.307232] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:40:01.430 
[2024-11-20 05:51:21.307241] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:40:01.430 [2024-11-20 05:51:21.307247] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:01.430 [2024-11-20 05:51:21.307260] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:01.430 [2024-11-20 05:51:21.307401] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:40:01.430 [2024-11-20 05:51:21.307422] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:40:01.430 [2024-11-20 05:51:21.307429] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:40:01.430 [2024-11-20 05:51:21.307434] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:40:01.430 [2024-11-20 05:51:21.307445] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:40:01.690 [2024-11-20 05:51:21.352923] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:40:01.690 [2024-11-20 05:51:21.352939] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:01.690 [2024-11-20 05:51:21.352969] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:40:01.690 [2024-11-20 05:51:21.352976] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@65 -- # sort 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.257 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:02.516 [2024-11-20 05:51:22.424883] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:40:02.516 [2024-11-20 05:51:22.424913] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:02.516 [2024-11-20 05:51:22.424943] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:40:02.516 [2024-11-20 05:51:22.424953] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:02.516 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:02.779 [2024-11-20 05:51:22.433893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.433927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.433939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.433948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.433958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.433967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.433977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.433985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.433995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.779 [2024-11-20 05:51:22.436880] bdev_nvme.c:7366:discovery_aer_cb:
*INFO*: Discovery[10.0.0.3:8009] got aer 00:40:02.779 [2024-11-20 05:51:22.436922] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:40:02.779 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:02.779 05:51:22 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:40:02.779 [2024-11-20 05:51:22.443869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.779 [2024-11-20 05:51:22.445553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.445574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.445585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.445593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.445603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.445612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.445621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:02.779 [2024-11-20 05:51:22.445630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:02.779 [2024-11-20 05:51:22.445639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.779 [2024-11-20 05:51:22.453887] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.779 [2024-11-20 05:51:22.453903] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.779 [2024-11-20 05:51:22.453909] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.779 [2024-11-20 05:51:22.453915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.779 [2024-11-20 05:51:22.453959] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
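
The "-- #" lines above are bash xtrace from the small helpers in host/mdns_discovery.sh. Reconstructed from the trace alone (the jq/sort/xargs pipelines and the notification_count/notify_id assignments are all visible in it), they look roughly like the sketch below; this is an inference, not the verbatim test source, and HOST_SOCK merely stands in for the /tmp/host.sock path seen in the trace.

# Sketch reconstructed from the xtrace above; an inference, not the verbatim test source.
HOST_SOCK=/tmp/host.sock    # host-side bdev_nvme RPC socket seen in the trace

get_bdev_list() {
    # All bdev names as one sorted line, e.g.
    # "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2"
    rpc_cmd -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # Service IDs of every path the named controller holds,
    # e.g. "4420 4421" once the second listener has attached
    rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers -n $1 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # Notifications issued since the last checkpoint; the trace shows this
    # setting notification_count=2 and advancing notify_id from 2 to 4
    notification_count=$(rpc_cmd -s $HOST_SOCK notify_get_notifications -i $notify_id | jq '. | length')
    notify_id=$((notify_id + notification_count))
}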
00:40:02.779 [2024-11-20 05:51:22.454024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.779 [2024-11-20 05:51:22.454039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.779 [2024-11-20 05:51:22.454049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.779 [2024-11-20 05:51:22.454062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.779 [2024-11-20 05:51:22.454076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.779 [2024-11-20 05:51:22.454085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.779 [2024-11-20 05:51:22.454095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.779 [2024-11-20 05:51:22.454103] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.779 [2024-11-20 05:51:22.454109] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.779 [2024-11-20 05:51:22.454114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.779 [2024-11-20 05:51:22.455525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.779 [2024-11-20 05:51:22.463966] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.779 [2024-11-20 05:51:22.463984] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.779 [2024-11-20 05:51:22.463990] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.779 [2024-11-20 05:51:22.463995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.779 [2024-11-20 05:51:22.464018] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:40:02.779 [2024-11-20 05:51:22.464061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.779 [2024-11-20 05:51:22.464075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.779 [2024-11-20 05:51:22.464084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.779 [2024-11-20 05:51:22.464097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.779 [2024-11-20 05:51:22.464110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.779 [2024-11-20 05:51:22.464118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.779 [2024-11-20 05:51:22.464127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
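
The phase traced above is driven by listener reconfiguration: adding the 4421 listeners raises an asynchronous event (AER) on each discovery controller, the host re-reads the discovery log page and attaches the new 10.0.0.3:4421 and 10.0.0.4:4421 paths, and removing the 4420 listeners then leaves every reconnect to the old paths refused. The same sequence can be replayed by hand against a running target with the stock scripts/rpc.py client; a sketch, assuming the target sits on the default RPC socket (the test goes through its own rpc_cmd wrapper instead):

# Hypothetical by-hand replay of the RPC sequence seen in the xtrace.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421
sleep 1    # let the AER -> discovery-log-page -> attach round trip finish
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420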
00:40:02.779 [2024-11-20 05:51:22.464135] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.779 [2024-11-20 05:51:22.464140] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.779 [2024-11-20 05:51:22.464145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.779 [2024-11-20 05:51:22.465533] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.779 [2024-11-20 05:51:22.465550] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.779 [2024-11-20 05:51:22.465555] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.779 [2024-11-20 05:51:22.465560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.779 [2024-11-20 05:51:22.465582] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.779 [2024-11-20 05:51:22.465619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.779 [2024-11-20 05:51:22.465632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.779 [2024-11-20 05:51:22.465641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.779 [2024-11-20 05:51:22.465654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.779 [2024-11-20 05:51:22.465665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.779 [2024-11-20 05:51:22.465674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.779 [2024-11-20 05:51:22.465683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.779 [2024-11-20 05:51:22.465690] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:40:02.779 [2024-11-20 05:51:22.465695] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.779 [2024-11-20 05:51:22.465700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.779 [2024-11-20 05:51:22.474027] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.780 [2024-11-20 05:51:22.474044] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.780 [2024-11-20 05:51:22.474050] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.474055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.780 [2024-11-20 05:51:22.474092] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
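
errno = 111 in the posix_sock_create errors above is ECONNREFUSED: once the 4420 listeners are gone, nothing accepts on 10.0.0.3:4420 or 10.0.0.4:4420, so each reconnect attempt to the old path is refused at the TCP level before any NVMe traffic flows. The mapping is quick to confirm on the build host:

# errno 111 on Linux is ECONNREFUSED ("Connection refused")
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'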
00:40:02.780 [2024-11-20 05:51:22.474134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.780 [2024-11-20 05:51:22.474147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.780 [2024-11-20 05:51:22.474157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.780 [2024-11-20 05:51:22.474169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.780 [2024-11-20 05:51:22.474181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.780 [2024-11-20 05:51:22.474189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.780 [2024-11-20 05:51:22.474198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.780 [2024-11-20 05:51:22.474206] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.780 [2024-11-20 05:51:22.474211] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.780 [2024-11-20 05:51:22.474216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.780 [2024-11-20 05:51:22.475592] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.780 [2024-11-20 05:51:22.475609] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.780 [2024-11-20 05:51:22.475615] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.475620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.780 [2024-11-20 05:51:22.475641] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.475678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.780 [2024-11-20 05:51:22.475691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.780 [2024-11-20 05:51:22.475700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.780 [2024-11-20 05:51:22.475712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.780 [2024-11-20 05:51:22.475724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.780 [2024-11-20 05:51:22.475733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.780 [2024-11-20 05:51:22.475741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.780 [2024-11-20 05:51:22.475749] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:40:02.780 [2024-11-20 05:51:22.475754] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.780 [2024-11-20 05:51:22.475759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.780 [2024-11-20 05:51:22.484103] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.780 [2024-11-20 05:51:22.484124] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.780 [2024-11-20 05:51:22.484130] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.484135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.780 [2024-11-20 05:51:22.484158] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.484217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.780 [2024-11-20 05:51:22.484231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.780 [2024-11-20 05:51:22.484240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.780 [2024-11-20 05:51:22.484254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.780 [2024-11-20 05:51:22.484266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.780 [2024-11-20 05:51:22.484274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.780 [2024-11-20 05:51:22.484283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.780 [2024-11-20 05:51:22.484291] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.780 [2024-11-20 05:51:22.484296] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.780 [2024-11-20 05:51:22.484301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.780 [2024-11-20 05:51:22.485650] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.780 [2024-11-20 05:51:22.485667] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.780 [2024-11-20 05:51:22.485673] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.485678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.780 [2024-11-20 05:51:22.485700] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:40:02.780 [2024-11-20 05:51:22.485735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.780 [2024-11-20 05:51:22.485748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.780 [2024-11-20 05:51:22.485758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.780 [2024-11-20 05:51:22.485770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.780 [2024-11-20 05:51:22.485781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.780 [2024-11-20 05:51:22.485790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.780 [2024-11-20 05:51:22.485798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.780 [2024-11-20 05:51:22.485805] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:40:02.780 [2024-11-20 05:51:22.485811] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.780 [2024-11-20 05:51:22.485815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.780 [2024-11-20 05:51:22.494166] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.780 [2024-11-20 05:51:22.494180] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.780 [2024-11-20 05:51:22.494185] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.494190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.780 [2024-11-20 05:51:22.494227] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.494263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.780 [2024-11-20 05:51:22.494276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.780 [2024-11-20 05:51:22.494285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.780 [2024-11-20 05:51:22.494297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.780 [2024-11-20 05:51:22.494309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.780 [2024-11-20 05:51:22.494317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.780 [2024-11-20 05:51:22.494325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.780 [2024-11-20 05:51:22.494349] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:40:02.780 [2024-11-20 05:51:22.494354] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.780 [2024-11-20 05:51:22.494359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.780 [2024-11-20 05:51:22.495708] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.780 [2024-11-20 05:51:22.495725] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.780 [2024-11-20 05:51:22.495731] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.495736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.780 [2024-11-20 05:51:22.495757] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.780 [2024-11-20 05:51:22.495792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.780 [2024-11-20 05:51:22.495805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.780 [2024-11-20 05:51:22.495814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.781 [2024-11-20 05:51:22.495826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.781 [2024-11-20 05:51:22.495838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.781 [2024-11-20 05:51:22.495846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.781 [2024-11-20 05:51:22.495854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.781 [2024-11-20 05:51:22.495862] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:40:02.781 [2024-11-20 05:51:22.495867] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.781 [2024-11-20 05:51:22.495872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.781 [2024-11-20 05:51:22.504233] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.781 [2024-11-20 05:51:22.504250] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.781 [2024-11-20 05:51:22.504255] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.504260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.781 [2024-11-20 05:51:22.504280] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:40:02.781 [2024-11-20 05:51:22.504316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.781 [2024-11-20 05:51:22.504328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.781 [2024-11-20 05:51:22.504337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.781 [2024-11-20 05:51:22.504349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.781 [2024-11-20 05:51:22.504360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.781 [2024-11-20 05:51:22.504369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.781 [2024-11-20 05:51:22.504377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.781 [2024-11-20 05:51:22.504384] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.781 [2024-11-20 05:51:22.504389] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.781 [2024-11-20 05:51:22.504394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.781 [2024-11-20 05:51:22.505765] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.781 [2024-11-20 05:51:22.505782] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.781 [2024-11-20 05:51:22.505787] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.505792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.781 [2024-11-20 05:51:22.505813] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.505848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.781 [2024-11-20 05:51:22.505861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.781 [2024-11-20 05:51:22.505870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.781 [2024-11-20 05:51:22.505882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.781 [2024-11-20 05:51:22.505894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.781 [2024-11-20 05:51:22.505902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.781 [2024-11-20 05:51:22.505910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.781 [2024-11-20 05:51:22.505918] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:40:02.781 [2024-11-20 05:51:22.505923] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.781 [2024-11-20 05:51:22.505928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.781 [2024-11-20 05:51:22.514287] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.781 [2024-11-20 05:51:22.514302] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.781 [2024-11-20 05:51:22.514307] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.514312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.781 [2024-11-20 05:51:22.514348] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.514383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.781 [2024-11-20 05:51:22.514396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.781 [2024-11-20 05:51:22.514405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.781 [2024-11-20 05:51:22.514417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.781 [2024-11-20 05:51:22.514429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.781 [2024-11-20 05:51:22.514437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.781 [2024-11-20 05:51:22.514454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.781 [2024-11-20 05:51:22.514461] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.781 [2024-11-20 05:51:22.514466] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.781 [2024-11-20 05:51:22.514471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.781 [2024-11-20 05:51:22.515822] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.781 [2024-11-20 05:51:22.515839] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.781 [2024-11-20 05:51:22.515845] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.515850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.781 [2024-11-20 05:51:22.515871] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:40:02.781 [2024-11-20 05:51:22.515907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.781 [2024-11-20 05:51:22.515920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.781 [2024-11-20 05:51:22.515929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.781 [2024-11-20 05:51:22.515941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.781 [2024-11-20 05:51:22.515953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.781 [2024-11-20 05:51:22.515961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.781 [2024-11-20 05:51:22.515970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.781 [2024-11-20 05:51:22.515977] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:40:02.781 [2024-11-20 05:51:22.515982] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.781 [2024-11-20 05:51:22.515987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.781 [2024-11-20 05:51:22.524358] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.781 [2024-11-20 05:51:22.524381] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.781 [2024-11-20 05:51:22.524387] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.524392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.781 [2024-11-20 05:51:22.524416] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:40:02.781 [2024-11-20 05:51:22.524468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.781 [2024-11-20 05:51:22.524482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.781 [2024-11-20 05:51:22.524492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.781 [2024-11-20 05:51:22.524504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.781 [2024-11-20 05:51:22.524517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.781 [2024-11-20 05:51:22.524525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.781 [2024-11-20 05:51:22.524533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.781 [2024-11-20 05:51:22.524541] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:40:02.781 [2024-11-20 05:51:22.524546] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.781 [2024-11-20 05:51:22.524551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.781 [2024-11-20 05:51:22.525880] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.781 [2024-11-20 05:51:22.525897] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.782 [2024-11-20 05:51:22.525903] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.525908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.782 [2024-11-20 05:51:22.525931] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.525969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.782 [2024-11-20 05:51:22.525983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.782 [2024-11-20 05:51:22.525992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.782 [2024-11-20 05:51:22.526004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.782 [2024-11-20 05:51:22.526078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.782 [2024-11-20 05:51:22.526088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.782 [2024-11-20 05:51:22.526097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.782 [2024-11-20 05:51:22.526104] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:40:02.782 [2024-11-20 05:51:22.526110] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.782 [2024-11-20 05:51:22.526116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.782 [2024-11-20 05:51:22.534424] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.782 [2024-11-20 05:51:22.534442] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.782 [2024-11-20 05:51:22.534452] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.534457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.782 [2024-11-20 05:51:22.534495] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:40:02.782 [2024-11-20 05:51:22.534534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.782 [2024-11-20 05:51:22.534548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.782 [2024-11-20 05:51:22.534557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.782 [2024-11-20 05:51:22.534569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.782 [2024-11-20 05:51:22.534587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.782 [2024-11-20 05:51:22.534595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.782 [2024-11-20 05:51:22.534604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.782 [2024-11-20 05:51:22.534611] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.782 [2024-11-20 05:51:22.534616] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.782 [2024-11-20 05:51:22.534621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.782 [2024-11-20 05:51:22.535938] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.782 [2024-11-20 05:51:22.535956] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.782 [2024-11-20 05:51:22.535961] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.535967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.782 [2024-11-20 05:51:22.535988] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.536024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.782 [2024-11-20 05:51:22.536037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.782 [2024-11-20 05:51:22.536046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.782 [2024-11-20 05:51:22.536058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.782 [2024-11-20 05:51:22.536085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.782 [2024-11-20 05:51:22.536094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.782 [2024-11-20 05:51:22.536102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.782 [2024-11-20 05:51:22.536109] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:40:02.782 [2024-11-20 05:51:22.536115] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.782 [2024-11-20 05:51:22.536119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.782 [2024-11-20 05:51:22.544503] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.782 [2024-11-20 05:51:22.544520] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.782 [2024-11-20 05:51:22.544525] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.544530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.782 [2024-11-20 05:51:22.544550] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.544585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.782 [2024-11-20 05:51:22.544598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.782 [2024-11-20 05:51:22.544607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.782 [2024-11-20 05:51:22.544619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.782 [2024-11-20 05:51:22.544630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.782 [2024-11-20 05:51:22.544638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.782 [2024-11-20 05:51:22.544647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.782 [2024-11-20 05:51:22.544654] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.782 [2024-11-20 05:51:22.544659] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.782 [2024-11-20 05:51:22.544664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.782 [2024-11-20 05:51:22.545996] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.782 [2024-11-20 05:51:22.546013] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.782 [2024-11-20 05:51:22.546018] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.546023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.782 [2024-11-20 05:51:22.546044] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:40:02.782 [2024-11-20 05:51:22.546079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.782 [2024-11-20 05:51:22.546092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.782 [2024-11-20 05:51:22.546101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.782 [2024-11-20 05:51:22.546113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.782 [2024-11-20 05:51:22.546138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.782 [2024-11-20 05:51:22.546147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.782 [2024-11-20 05:51:22.546155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.782 [2024-11-20 05:51:22.546163] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:40:02.782 [2024-11-20 05:51:22.546168] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.782 [2024-11-20 05:51:22.546173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.782 [2024-11-20 05:51:22.554557] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.782 [2024-11-20 05:51:22.554571] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.782 [2024-11-20 05:51:22.554576] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.554581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.782 [2024-11-20 05:51:22.554618] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:40:02.782 [2024-11-20 05:51:22.554659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.782 [2024-11-20 05:51:22.554672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.783 [2024-11-20 05:51:22.554681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.783 [2024-11-20 05:51:22.554692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.783 [2024-11-20 05:51:22.554704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.783 [2024-11-20 05:51:22.554712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.783 [2024-11-20 05:51:22.554720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.783 [2024-11-20 05:51:22.554728] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:40:02.783 [2024-11-20 05:51:22.554733] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.783 [2024-11-20 05:51:22.554737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.783 [2024-11-20 05:51:22.556052] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.783 [2024-11-20 05:51:22.556069] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.783 [2024-11-20 05:51:22.556074] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.783 [2024-11-20 05:51:22.556080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.783 [2024-11-20 05:51:22.556100] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.783 [2024-11-20 05:51:22.556135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.783 [2024-11-20 05:51:22.556149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.783 [2024-11-20 05:51:22.556158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.783 [2024-11-20 05:51:22.556169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.783 [2024-11-20 05:51:22.556195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.783 [2024-11-20 05:51:22.556204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.783 [2024-11-20 05:51:22.556213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.783 [2024-11-20 05:51:22.556221] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:40:02.783 [2024-11-20 05:51:22.556226] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.783 [2024-11-20 05:51:22.556231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.783 [2024-11-20 05:51:22.564626] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:40:02.783 [2024-11-20 05:51:22.564640] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:40:02.783 [2024-11-20 05:51:22.564645] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:40:02.783 [2024-11-20 05:51:22.564650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:02.783 [2024-11-20 05:51:22.564685] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:40:02.783 [2024-11-20 05:51:22.564720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.783 [2024-11-20 05:51:22.564733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18857f0 with addr=10.0.0.3, port=4420 00:40:02.783 [2024-11-20 05:51:22.564741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18857f0 is same with the state(6) to be set 00:40:02.783 [2024-11-20 05:51:22.564753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18857f0 (9): Bad file descriptor 00:40:02.783 [2024-11-20 05:51:22.564765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:02.783 [2024-11-20 05:51:22.564773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:02.783 [2024-11-20 05:51:22.564782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:02.783 [2024-11-20 05:51:22.564789] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:40:02.783 [2024-11-20 05:51:22.564794] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:40:02.783 [2024-11-20 05:51:22.564799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:02.783 [2024-11-20 05:51:22.566108] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:40:02.783 [2024-11-20 05:51:22.566124] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:40:02.783 [2024-11-20 05:51:22.566129] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:40:02.783 [2024-11-20 05:51:22.566135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:40:02.783 [2024-11-20 05:51:22.566156] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:40:02.783 [2024-11-20 05:51:22.566189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:02.783 [2024-11-20 05:51:22.566202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1892080 with addr=10.0.0.4, port=4420 00:40:02.783 [2024-11-20 05:51:22.566211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892080 is same with the state(6) to be set 00:40:02.783 [2024-11-20 05:51:22.566222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1892080 (9): Bad file descriptor 00:40:02.783 [2024-11-20 05:51:22.566247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:40:02.783 [2024-11-20 05:51:22.566256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:40:02.783 [2024-11-20 05:51:22.566265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:40:02.783 [2024-11-20 05:51:22.566272] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
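The stanza that repeats above is SPDK's bdev_nvme reconnect loop: delete qpairs, disconnect, re-dial, fail, clear pending resets, retry. errno 111 is ECONNREFUSED; nothing is listening on 10.0.0.3:4420 or 10.0.0.4:4420 any more because the test has moved each subsystem's listener to port 4421, so every attempt dies in posix_sock_create() until the discovery log page repoints the host. A sketch of the kind of listener swap that provokes this pattern (the actual earlier commands are not in this excerpt; NQNs, addresses, and ports are taken from the messages above, and scripts/rpc.py is assumed to be what the harness's rpc_cmd wraps):

    # Assumed reproduction of the step that triggers the reconnect storm above:
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
    # cnode20 gets the same swap on 10.0.0.4; until discovery catches up, every
    # reconnect to port 4420 fails with connect() errno 111 (ECONNREFUSED).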
00:40:02.783 [2024-11-20 05:51:22.566277] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:40:02.783 [2024-11-20 05:51:22.566282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:40:02.783 [2024-11-20 05:51:22.568246] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:40:02.783 [2024-11-20 05:51:22.568268] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:02.783 [2024-11-20 05:51:22.568283] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:02.783 [2024-11-20 05:51:22.568310] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:40:02.783 [2024-11-20 05:51:22.568322] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:40:02.783 [2024-11-20 05:51:22.568333] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:40:02.783 [2024-11-20 05:51:22.654305] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:02.783 [2024-11-20 05:51:22.654361] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 
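From this point the output is mostly bash xtrace from mdns_discovery.sh: each `-- #` fragment is one traced command, interleaved with the Jenkins timestamps. Helpers such as get_subsystem_names and get_bdev_list are thin RPC-plus-jq pipelines; stripped of the tracing, get_bdev_list amounts to the following (socket path copied from the trace; rpc_cmd is assumed to resolve to scripts/rpc.py):

    # What the traced get_bdev_list pipeline does:
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

The test then pattern-matches the result against the four expected namespaces, mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2.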
00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:40:03.760 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.020 05:51:23 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:40:04.020 [2024-11-20 05:51:23.751526] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@211 -- # get_bdev_list 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:04.957 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:05.217 [2024-11-20 05:51:24.951466] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:40:05.217 2024/11/20 05:51:24 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:40:05.217 request: 00:40:05.217 { 00:40:05.217 "method": "bdev_nvme_start_mdns_discovery", 00:40:05.217 "params": { 00:40:05.217 "name": "mdns", 00:40:05.217 "svcname": "_nvme-disc._http", 00:40:05.217 "hostnqn": "nqn.2021-12.io.spdk:test" 00:40:05.217 } 00:40:05.217 } 00:40:05.217 Got JSON-RPC error response 00:40:05.217 GoRPCClient: error on JSON-RPC call 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:05.217 05:51:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:40:05.785 [2024-11-20 05:51:25.540063] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:40:05.785 [2024-11-20 05:51:25.640062] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:40:06.045 [2024-11-20 05:51:25.740067] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:40:06.045 [2024-11-20 05:51:25.740083] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:40:06.045 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:06.045 cookie is 0 00:40:06.045 is_local: 1 00:40:06.045 our_own: 0 00:40:06.045 wide_area: 0 00:40:06.045 multicast: 1 00:40:06.045 cached: 1 00:40:06.045 [2024-11-20 05:51:25.840067] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:40:06.045 [2024-11-20 05:51:25.840084] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:40:06.045 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:06.045 cookie is 0 00:40:06.045 is_local: 1 00:40:06.045 our_own: 0 00:40:06.045 wide_area: 0 00:40:06.045 multicast: 1 00:40:06.045 cached: 1 00:40:06.045 [2024-11-20 05:51:25.840093] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:40:06.045 [2024-11-20 05:51:25.940071] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:40:06.045 [2024-11-20 05:51:25.940088] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:40:06.045 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:06.045 cookie is 0 00:40:06.045 is_local: 1 00:40:06.045 our_own: 0 00:40:06.045 wide_area: 0 00:40:06.045 multicast: 1 00:40:06.045 cached: 1 00:40:06.304 [2024-11-20 05:51:26.040071] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:40:06.304 [2024-11-20 05:51:26.040088] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:40:06.304 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:06.304 cookie is 0 00:40:06.304 is_local: 1 00:40:06.304 our_own: 0 00:40:06.304 wide_area: 0 00:40:06.304 multicast: 1 00:40:06.304 cached: 1 00:40:06.305 [2024-11-20 05:51:26.040097] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:40:06.873 [2024-11-20 05:51:26.744610] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:40:06.874 [2024-11-20 05:51:26.744631] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:40:06.874 [2024-11-20 05:51:26.744644] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:40:07.132 [2024-11-20 05:51:26.830696] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:40:07.132 [2024-11-20 05:51:26.888985] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:40:07.132 [2024-11-20 05:51:26.889490] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x1872a20:1 started. 00:40:07.132 [2024-11-20 05:51:26.890833] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:40:07.132 [2024-11-20 05:51:26.890858] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:40:07.132 [2024-11-20 05:51:26.893470] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x1872a20 was disconnected and freed. delete nvme_qpair. 
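Two threads come together above. First, the negative test at mdns_discovery.sh@217: starting a second mDNS discovery under the already-registered name "mdns" is rejected with JSON-RPC Code=-17, the errno value of EEXIST ("File exists"), and the NOT wrapper passes precisely because the call fails. The equivalent direct call, with every argument copied from the traced command, is:

    # Expected to fail with "Code=-17 Msg=File exists" while discovery 'mdns' is running:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test

Second, the mdns_resolve_handler records show what the advertisement carries: a service of type _nvme-disc._tcp whose TXT data names the discovery subsystem ("nqn=nqn.2014-08.org.nvmexpress.discovery") and the transport ("p=tcp"). Once resolved, the host attaches to the advertised 8009 discovery service and from there reconnects each subsystem on its new 4421 listener, as the discovery_attach_cb / ctrlr-created messages above and below record.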
00:40:07.132 [2024-11-20 05:51:26.954418] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:40:07.132 [2024-11-20 05:51:26.954436] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:40:07.132 [2024-11-20 05:51:26.954470] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:40:07.132 [2024-11-20 05:51:27.041523] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:40:07.391 [2024-11-20 05:51:27.099798] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:40:07.391 [2024-11-20 05:51:27.100226] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x188da90:1 started. 00:40:07.391 [2024-11-20 05:51:27.101511] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:40:07.391 [2024-11-20 05:51:27.101535] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:40:07.391 [2024-11-20 05:51:27.103510] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x188da90 was disconnected and freed. delete nvme_qpair. 00:40:10.683 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:40:10.683 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:40:10.683 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:40:10.683 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.684 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.684 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:40:10.684 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:40:10.684 05:51:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.684 [2024-11-20 05:51:30.158592] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:40:10.684 2024/11/20 05:51:30 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:40:10.684 request: 00:40:10.684 { 00:40:10.684 "method": "bdev_nvme_start_mdns_discovery", 00:40:10.684 "params": { 00:40:10.684 "name": "cdc", 00:40:10.684 "svcname": "_nvme-disc._tcp", 00:40:10.684 "hostnqn": "nqn.2021-12.io.spdk:test" 00:40:10.684 } 00:40:10.684 } 00:40:10.684 Got JSON-RPC error response 00:40:10.684 GoRPCClient: error on JSON-RPC call 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 
]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:40:10.684 [2024-11-20 05:51:30.340122] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:40:10.684 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:40:10.684 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:40:10.684 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:40:10.684 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:40:10.684 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:10.685 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:10.685 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:10.685 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:40:10.685 05:51:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:10.685 05:51:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:40:11.622 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:40:11.622 
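check_mdns_request_exists drives avahi-browse in parseable mode and scans each record for a process name, address, and port. The `+;...` lines are browse events (interface;protocol;name;type;domain) and the `=;...` lines are resolved records that append host, address, port, and TXT fields. A compact sketch of the 'found' check traced above (the helper's real loop walks the readarray'd lines with glob matches, as shown):

    # Roughly what check_mdns_request_exists spdk1 10.0.0.3 8009 found does:
    while IFS= read -r line; do
      [[ $line == *spdk1* && $line == *10.0.0.3* && $line == *8009* ]] && { echo found; break; }
    done < <(avahi-browse -t -r _nvme-disc._tcp -p)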
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:40:11.622 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95332 00:40:11.622 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95332 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95362 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:40:11.882 Got SIGTERM, quitting. 00:40:11.882 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:40:11.882 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:40:11.882 avahi-daemon 0.8 exiting. 
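After bdev_nvme_stop_mdns_discovery and nvmf_stop_mdns_prr, the re-run of avahi-browse returns only the spdk0 records, so the 'not found' variant of the check succeeds; it is the same scan with the success condition inverted, approximately (not the helper's literal loop):

    # Passes once the spdk1 registrations are gone:
    ! avahi-browse -t -r _nvme-disc._tcp -p | grep -q spdk1

With the assertions done, teardown is underway: the exit trap is cleared, the host-side processes (pids 95332 and 95362 in this run) are killed, and nvmftestfini begins unwinding the target, which is why avahi-daemon's SIGTERM and multicast-group messages interleave with the script output above.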
00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:11.882 rmmod nvme_tcp 00:40:11.882 rmmod nvme_fabrics 00:40:11.882 rmmod nvme_keyring 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 95281 ']' 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 95281 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # '[' -z 95281 ']' 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # kill -0 95281 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # uname 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95281 00:40:11.882 killing process with pid 95281 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95281' 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@971 -- # kill 95281 00:40:11.882 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@976 -- # wait 95281 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:40:12.142 05:51:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:40:12.142 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:40:12.142 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:40:12.402 00:40:12.402 real 0m23.018s 00:40:12.402 user 0m43.578s 00:40:12.402 sys 0m3.179s 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:12.402 ************************************ 00:40:12.402 END TEST nvmf_mdns_discovery 00:40:12.402 ************************************ 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:12.402 ************************************ 00:40:12.402 START TEST nvmf_host_multipath 00:40:12.402 ************************************ 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:40:12.402 * Looking for test storage... 
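The multipath suite that starts next first probes the installed lcov so common.sh can pick coverage flags; the cmp_versions walk traced just below splits each version string on dots/dashes and compares it field by field. A standalone distillation of that comparison (a sketch, not the literal scripts/common.sh code):

    # Field-by-field version compare, as in the 'lt 1.15 2' trace below:
    ver_lt() {
      local IFS=.-
      local a=($1) b=($2) i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
    }
    ver_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* options"

Because 1.15 sorts below 2, the trace below ends with LCOV_OPTS/LCOV exported with the old-style --rc lcov_branch_coverage/lcov_function_coverage flags.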
00:40:12.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:40:12.402 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.662 --rc genhtml_branch_coverage=1 00:40:12.662 --rc genhtml_function_coverage=1 00:40:12.662 --rc genhtml_legend=1 00:40:12.662 --rc geninfo_all_blocks=1 00:40:12.662 --rc geninfo_unexecuted_blocks=1 00:40:12.662 00:40:12.662 ' 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.662 --rc genhtml_branch_coverage=1 00:40:12.662 --rc genhtml_function_coverage=1 00:40:12.662 --rc genhtml_legend=1 00:40:12.662 --rc geninfo_all_blocks=1 00:40:12.662 --rc geninfo_unexecuted_blocks=1 00:40:12.662 00:40:12.662 ' 00:40:12.662 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.663 --rc genhtml_branch_coverage=1 00:40:12.663 --rc genhtml_function_coverage=1 00:40:12.663 --rc genhtml_legend=1 00:40:12.663 --rc geninfo_all_blocks=1 00:40:12.663 --rc geninfo_unexecuted_blocks=1 00:40:12.663 00:40:12.663 ' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:12.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:12.663 --rc genhtml_branch_coverage=1 00:40:12.663 --rc genhtml_function_coverage=1 00:40:12.663 --rc genhtml_legend=1 00:40:12.663 --rc geninfo_all_blocks=1 00:40:12.663 --rc geninfo_unexecuted_blocks=1 00:40:12.663 00:40:12.663 ' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:12.663 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:40:12.663 Cannot find device "nvmf_init_br" 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:40:12.663 Cannot find device "nvmf_init_br2" 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:40:12.663 Cannot find device "nvmf_tgt_br" 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:40:12.663 Cannot find device "nvmf_tgt_br2" 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:40:12.663 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:40:12.664 Cannot find device "nvmf_init_br" 00:40:12.664 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:40:12.664 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:40:12.664 Cannot find device "nvmf_init_br2" 00:40:12.664 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:40:12.664 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:40:12.664 Cannot find device "nvmf_tgt_br" 00:40:12.664 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:40:12.664 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:40:12.664 Cannot find device "nvmf_tgt_br2" 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:40:12.923 Cannot find device "nvmf_br" 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:40:12.923 Cannot find device "nvmf_init_if" 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:40:12.923 Cannot find device "nvmf_init_if2" 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:40:12.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:12.923 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:12.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:12.924 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:40:13.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:13.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:40:13.184 00:40:13.184 --- 10.0.0.3 ping statistics --- 00:40:13.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.184 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:40:13.184 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:40:13.184 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:40:13.184 00:40:13.184 --- 10.0.0.4 ping statistics --- 00:40:13.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.184 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:13.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:13.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:40:13.184 00:40:13.184 --- 10.0.0.1 ping statistics --- 00:40:13.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.184 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:40:13.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:13.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:40:13.184 00:40:13.184 --- 10.0.0.2 ping statistics --- 00:40:13.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:13.184 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:13.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=96014 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 96014 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 96014 ']' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:13.184 05:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:13.184 [2024-11-20 05:51:32.971084] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
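
By this point nvmf_veth_init in test/nvmf/common.sh has finished building the all-virtual test network the transcript traces: four veth pairs, with the target-side ends (10.0.0.3 and 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, the initiator ends (10.0.0.1 and 10.0.0.2) left on the host, all four bridge-side peers enslaved to nvmf_br, iptables ACCEPT rules for port 4420, and four pings verifying connectivity before nvmf_tgt is launched inside the namespace. The earlier "Cannot find device"/"Cannot open network namespace" messages are just the teardown half of this routine running first against devices that do not exist yet; each failing command is followed by `true` so the script continues. Condensed into a standalone script, the traced setup amounts to roughly the following sketch (root required; the iptables rules are omitted for brevity):

```bash
#!/usr/bin/env bash
# Condensed replay of the veth/namespace/bridge topology traced above (run as root).
set -e

ip netns add nvmf_tgt_ns_spdk

# four veth pairs: the *_if ends carry IPs, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces live in the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# one bridge ties the four *_br peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

ping -c 1 10.0.0.3   # host -> target, as the trace verifies for all four addresses
```

With this in place the target runs inside the namespace (`ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3`, as the log shows) while initiators reach it from the host over 10.0.0.3/10.0.0.4.
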
00:40:13.184 [2024-11-20 05:51:32.971389] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.444 [2024-11-20 05:51:33.128780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:13.444 [2024-11-20 05:51:33.200769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:13.444 [2024-11-20 05:51:33.201133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:13.444 [2024-11-20 05:51:33.201290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:13.444 [2024-11-20 05:51:33.201368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:13.444 [2024-11-20 05:51:33.201411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:13.444 [2024-11-20 05:51:33.202773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.444 [2024-11-20 05:51:33.202786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96014 00:40:14.382 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:14.642 [2024-11-20 05:51:34.347388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:14.642 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:14.901 Malloc0 00:40:14.901 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:40:15.161 05:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:15.161 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:40:15.420 [2024-11-20 05:51:35.246868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:15.420 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:40:15.680 [2024-11-20 05:51:35.450963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:40:15.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96115 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96115 /var/tmp/bdevperf.sock 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 96115 ']' 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:15.680 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:15.939 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:15.939 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:40:15.939 05:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:40:16.198 05:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:40:16.767 Nvme0n1 00:40:16.767 05:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:40:17.026 Nvme0n1 00:40:17.026 05:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:40:17.026 05:51:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:40:17.963 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:40:17.963 05:51:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:18.222 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
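
Everything from here on repeats one measurement cycle: host/multipath.sh flips the ANA state of the two listeners with set_ANA_state, then confirm_io_on_port attaches the bpftrace probes from scripts/bpf/nvmf_path.bt to the target for six seconds, counts I/O per `@path[ip, port]`, and checks that both the listener reporting the expected ANA state and the path actually carrying bdevperf I/O are the expected port. Reconstructed from the traced commands, the cycle looks roughly like this (a sketch; the real script's option handling and error paths are omitted):

```bash
#!/usr/bin/env bash
# Sketch of the per-cycle helpers traced below. Names follow the trace;
# the real host/multipath.sh handles options and cleanup more carefully.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh
NQN=nqn.2016-06.io.spdk:cnode1
nvmfapp_pid=96014            # target pid from this log; normally captured at start
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

set_ANA_state() {            # $1 = ANA state for port 4420, $2 = for port 4421
    $rpc_py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

confirm_io_on_port() {       # $1 = expected ANA state, $2 = expected port
    # count I/O per path with the nvmf_path.bt probes while bdevperf runs
    "$bpf_sh" "$nvmfapp_pid" /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt \
        &> "$trace" &
    dtrace_pid=$!
    sleep 6
    # which listener currently reports the expected ANA state?
    active_port=$($rpc_py nvmf_subsystem_get_listeners "$NQN" |
        jq -r ".[] | select (.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
    # which port did the probes actually see I/O on first?
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
    kill "$dtrace_pid"
    rm -f "$trace"
    [[ $active_port == "$2" && $port == "$2" ]]
}

set_ANA_state non_optimized optimized
confirm_io_on_port optimized 4421   # expect all I/O on the optimized 4421 path
```

In the first cycle below the expected answer is port 4421; the trace.txt excerpt ("Attaching 4 probes... @path[10.0.0.3, 4421]: 21000 ...") shows all I/O on 4421, so the check passes and probe process 96189 is killed.
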
00:40:18.481 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:40:18.481 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96189 00:40:18.481 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:40:18.481 05:51:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:25.050 Attaching 4 probes... 00:40:25.050 @path[10.0.0.3, 4421]: 21000 00:40:25.050 @path[10.0.0.3, 4421]: 21274 00:40:25.050 @path[10.0.0.3, 4421]: 21290 00:40:25.050 @path[10.0.0.3, 4421]: 21063 00:40:25.050 @path[10.0.0.3, 4421]: 21120 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96189 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:25.050 05:51:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:40:25.310 05:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:40:25.310 05:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96331 00:40:25.310 05:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:40:25.310 05:51:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | 
select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:31.879 Attaching 4 probes... 00:40:31.879 @path[10.0.0.3, 4420]: 19911 00:40:31.879 @path[10.0.0.3, 4420]: 20179 00:40:31.879 @path[10.0.0.3, 4420]: 20219 00:40:31.879 @path[10.0.0.3, 4420]: 20344 00:40:31.879 @path[10.0.0.3, 4420]: 20115 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96331 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:31.879 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:32.139 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:40:32.139 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:40:32.139 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96460 00:40:32.139 05:51:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:40:38.708 05:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:40:38.708 05:51:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:38.708 Attaching 4 probes... 
00:40:38.708 @path[10.0.0.3, 4421]: 16331 00:40:38.708 @path[10.0.0.3, 4421]: 22979 00:40:38.708 @path[10.0.0.3, 4421]: 22943 00:40:38.708 @path[10.0.0.3, 4421]: 22748 00:40:38.708 @path[10.0.0.3, 4421]: 22609 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96460 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96594 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:40:38.708 05:51:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:45.303 Attaching 4 probes... 
00:40:45.303 00:40:45.303 00:40:45.303 00:40:45.303 00:40:45.303 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96594 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:40:45.303 05:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:45.303 05:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:45.562 05:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:40:45.563 05:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96724 00:40:45.563 05:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:40:45.563 05:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:52.132 Attaching 4 probes... 
00:40:52.132 @path[10.0.0.3, 4421]: 21058 00:40:52.132 @path[10.0.0.3, 4421]: 21415 00:40:52.132 @path[10.0.0.3, 4421]: 21619 00:40:52.132 @path[10.0.0.3, 4421]: 21309 00:40:52.132 @path[10.0.0.3, 4421]: 21510 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96724 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:52.132 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:40:52.132 [2024-11-20 05:52:11.678239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8f8eb0 is same with the state(6) to be set 00:40:52.133 (previous tcp.c:1773 nvmf_tcp_qpair_set_recv_state *ERROR* message repeated ~80 times, timestamps 05:52:11.678284 through 05:52:11.679003) 00:40:52.133 05:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:40:53.065 05:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:40:53.065 05:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96860 00:40:53.065 05:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:40:53.065 05:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:59.642 Attaching 4 probes... 
00:40:59.642 @path[10.0.0.3, 4420]: 19682 00:40:59.642 @path[10.0.0.3, 4420]: 20558 00:40:59.642 @path[10.0.0.3, 4420]: 20527 00:40:59.642 @path[10.0.0.3, 4420]: 20536 00:40:59.642 @path[10.0.0.3, 4420]: 20461 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96860 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:40:59.642 05:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:40:59.642 [2024-11-20 05:52:19.273758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:40:59.642 05:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:40:59.900 05:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:41:06.463 05:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:41:06.463 05:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97052 00:41:06.463 05:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:41:06.463 05:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:41:11.735 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:41:11.735 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:41:11.994 Attaching 4 probes... 
00:41:11.994 @path[10.0.0.3, 4421]: 20204 00:41:11.994 @path[10.0.0.3, 4421]: 20512 00:41:11.994 @path[10.0.0.3, 4421]: 20493 00:41:11.994 @path[10.0.0.3, 4421]: 20489 00:41:11.994 @path[10.0.0.3, 4421]: 20720 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97052 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96115 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 96115 ']' 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 96115 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:11.994 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96115 00:41:12.261 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:41:12.261 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:41:12.261 killing process with pid 96115 00:41:12.261 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96115' 00:41:12.261 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 96115 00:41:12.261 05:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 96115 00:41:12.261 { 00:41:12.261 "results": [ 00:41:12.261 { 00:41:12.261 "job": "Nvme0n1", 00:41:12.261 "core_mask": "0x4", 00:41:12.261 "workload": "verify", 00:41:12.261 "status": "terminated", 00:41:12.261 "verify_range": { 00:41:12.261 "start": 0, 00:41:12.261 "length": 16384 00:41:12.261 }, 00:41:12.261 "queue_depth": 128, 00:41:12.261 "io_size": 4096, 00:41:12.261 "runtime": 55.077765, 00:41:12.261 "iops": 8979.213299595582, 00:41:12.261 "mibps": 35.07505195154524, 00:41:12.261 "io_failed": 0, 00:41:12.261 "io_timeout": 0, 00:41:12.261 "avg_latency_us": 14236.155058537954, 00:41:12.261 "min_latency_us": 963.5352380952381, 00:41:12.261 "max_latency_us": 7030452.419047619 00:41:12.261 } 00:41:12.261 ], 00:41:12.261 "core_count": 1 00:41:12.261 } 00:41:12.261 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96115 00:41:12.261 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:41:12.261 [2024-11-20 05:51:35.506388] Starting SPDK v25.01-pre git sha1 57b682926 / 
DPDK 24.03.0 initialization... 00:41:12.261 [2024-11-20 05:51:35.506480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96115 ] 00:41:12.261 [2024-11-20 05:51:35.658479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.261 [2024-11-20 05:51:35.713907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:12.261 Running I/O for 90 seconds... 00:41:12.261 10839.00 IOPS, 42.34 MiB/s [2024-11-20T05:52:32.180Z] 10909.50 IOPS, 42.62 MiB/s [2024-11-20T05:52:32.180Z] 10841.33 IOPS, 42.35 MiB/s [2024-11-20T05:52:32.180Z] 10799.25 IOPS, 42.18 MiB/s [2024-11-20T05:52:32.180Z] 10768.20 IOPS, 42.06 MiB/s [2024-11-20T05:52:32.180Z] 10734.67 IOPS, 41.93 MiB/s [2024-11-20T05:52:32.180Z] 10705.57 IOPS, 41.82 MiB/s [2024-11-20T05:52:32.180Z] 10690.62 IOPS, 41.76 MiB/s [2024-11-20T05:52:32.180Z] [2024-11-20 05:51:45.019983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.261 [2024-11-20 05:51:45.020033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:41:12.261 [2024-11-20 05:51:45.020077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.261 [2024-11-20 05:51:45.020092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:41:12.261 [2024-11-20 05:51:45.020111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.261 [2024-11-20 05:51:45.020126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:41:12.261 [2024-11-20 05:51:45.020145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.261 [2024-11-20 05:51:45.020158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:41:12.261 [2024-11-20 05:51:45.020176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.261 [2024-11-20 05:51:45.020190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:41:12.261 [2024-11-20 05:51:45.020208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.261 [2024-11-20 05:51:45.020222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:41:12.261 [2024-11-20 05:51:45.020240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.261 [2024-11-20 05:51:45.020253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:41:12.261 [2024-11-20 05:51:45.020271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.261 [2024-11-20 05:51:45.020284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:41:12.261 [... analogous READ/WRITE nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeated for the remaining LBAs in this burst ...] 00:41:12.262 [2024-11-20 05:51:45.024889] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:45.024903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:12.262 10626.56 IOPS, 41.51 MiB/s [2024-11-20T05:52:32.181Z] 10570.40 IOPS, 41.29 MiB/s [2024-11-20T05:52:32.181Z] 10531.82 IOPS, 41.14 MiB/s [2024-11-20T05:52:32.181Z] 10497.92 IOPS, 41.01 MiB/s [2024-11-20T05:52:32.181Z] 10471.77 IOPS, 40.91 MiB/s [2024-11-20T05:52:32.181Z] 10441.79 IOPS, 40.79 MiB/s [2024-11-20T05:52:32.181Z] [2024-11-20 05:51:51.668808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:51.668867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:41:12.262 [2024-11-20 05:51:51.668922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:51.668946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:41:12.262 [2024-11-20 05:51:51.668965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:51.668979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:41:12.262 [2024-11-20 05:51:51.668998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:51.669011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:12.262 [2024-11-20 05:51:51.669030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:51.669043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:12.262 [2024-11-20 05:51:51.669062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:51.669076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:12.262 [2024-11-20 05:51:51.669095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.262 [2024-11-20 05:51:51.669108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:12.262 [2024-11-20 05:51:51.669127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.263 [2024-11-20 05:51:51.669140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:12.263 [2024-11-20 
05:51:51.669159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.263 [2024-11-20 05:51:51.669173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:12.263 [... analogous READ/WRITE nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs repeated for the remaining LBAs in this burst ...] 00:41:12.265 [2024-11-20 05:51:51.672363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:41:12.265 [2024-11-20 05:51:51.672377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59520 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.265 [2024-11-20 05:51:51.672929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.265 [2024-11-20 05:51:51.672965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.672988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.265 [2024-11-20 05:51:51.673001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.673023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.265 [2024-11-20 05:51:51.673036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:12.265 [2024-11-20 05:51:51.673060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.265 [2024-11-20 05:51:51.673073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:41:12.266 [2024-11-20 05:51:51.673460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:51.673656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.266 [2024-11-20 05:51:51.673676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:41:12.266 10318.27 IOPS, 40.31 MiB/s [2024-11-20T05:52:32.185Z] 9793.94 IOPS, 38.26 MiB/s [2024-11-20T05:52:32.185Z] 9892.47 IOPS, 38.64 MiB/s [2024-11-20T05:52:32.185Z] 9977.72 IOPS, 38.98 MiB/s [2024-11-20T05:52:32.185Z] 10053.32 IOPS, 39.27 MiB/s [2024-11-20T05:52:32.185Z] 10117.05 IOPS, 39.52 MiB/s [2024-11-20T05:52:32.185Z] 10177.57 IOPS, 39.76 MiB/s [2024-11-20T05:52:32.185Z] [2024-11-20 05:51:58.587250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.266 [2024-11-20 05:51:58.587303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:58.587349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.266 [2024-11-20 05:51:58.587365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:58.587385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.266 [2024-11-20 05:51:58.587400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:58.587418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.266 [2024-11-20 05:51:58.587432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:58.587460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.266 [2024-11-20 05:51:58.587475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:58.587494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.266 [2024-11-20 05:51:58.587508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:41:12.266 [2024-11-20 05:51:58.587526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.266 [2024-11-20 05:51:58.587540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
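The throughput samples above are consistent with the command size in the trace: each I/O is len:8 blocks of 512 bytes, i.e. 4 KiB, so 10318.27 IOPS * 4096 B = 40.31 MiB/s, matching the first sample. The "(03/02)" pair that spdk_nvme_print_completion prints is the NVMe status field split as SCT/SC: Status Code Type 0x3 is Path Related Status, and within it Status Code 0x02 is Asymmetric Access Inaccessible, the expected completion while the target reports the namespace's ANA group as inaccessible. A minimal decoding sketch in Python, assuming only the standard NVMe status encoding (the helper and its table are illustrative, not part of the test suite):

# Hypothetical helper: decode the "(SCT/SC)" token printed by
# spdk_nvme_print_completion into an NVMe-spec status name.
STATUS_NAMES = {
    (0x0, 0x08): "ABORTED - SQ DELETION",          # Generic Command Status / SQ deletion
    (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE", # Path Related Status / ANA inaccessible
}

def decode_status(token: str) -> str:
    # token looks like "03/02": hex status code type, then hex status code
    sct, sc = (int(part, 16) for part in token.split("/"))
    return STATUS_NAMES.get((sct, sc), f"unknown status (sct={sct:#x} sc={sc:#x})")

print(decode_status("03/02"))  # -> ASYMMETRIC ACCESS INACCESSIBLE
print(decode_status("00/08"))  # -> ABORTED - SQ DELETION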
00:41:12.266 [2024-11-20 05:51:58.587250 - 05:51:58.592240] nvme_qpair.c: *NOTICE*: [elided for length: a second burst on the same qpair, 81 WRITE commands (sqid:1 nsid:1 lba:74352-74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with 47 READ commands (sqid:1 nsid:1 lba:73976-74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; per-command cid and sqhd values omitted]
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:51:58.592150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.269 [2024-11-20 05:51:58.592164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:51:58.592189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.269 [2024-11-20 05:51:58.592202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:51:58.592227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.269 [2024-11-20 05:51:58.592240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:41:12.269 10101.77 IOPS, 39.46 MiB/s [2024-11-20T05:52:32.188Z] 9662.57 IOPS, 37.74 MiB/s [2024-11-20T05:52:32.188Z] 9259.96 IOPS, 36.17 MiB/s [2024-11-20T05:52:32.188Z] 8889.56 IOPS, 34.72 MiB/s [2024-11-20T05:52:32.188Z] 8547.65 IOPS, 33.39 MiB/s [2024-11-20T05:52:32.188Z] 8231.07 IOPS, 32.15 MiB/s [2024-11-20T05:52:32.188Z] 7937.11 IOPS, 31.00 MiB/s [2024-11-20T05:52:32.188Z] 7752.83 IOPS, 30.28 MiB/s [2024-11-20T05:52:32.188Z] 7852.80 IOPS, 30.68 MiB/s [2024-11-20T05:52:32.188Z] 7945.45 IOPS, 31.04 MiB/s [2024-11-20T05:52:32.188Z] 8033.62 IOPS, 31.38 MiB/s [2024-11-20T05:52:32.188Z] 8113.42 IOPS, 31.69 MiB/s [2024-11-20T05:52:32.188Z] 8191.12 IOPS, 32.00 MiB/s [2024-11-20T05:52:32.188Z] [2024-11-20 05:52:11.679669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.679983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.269 [2024-11-20 05:52:11.679995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.269 [2024-11-20 05:52:11.680009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 
[2024-11-20 05:52:11.680412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.270 [2024-11-20 05:52:11.680783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.680815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.680842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.680869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.680898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.680925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.680952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.680979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.680993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.270 [2024-11-20 05:52:11.681006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.270 [2024-11-20 05:52:11.681020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70208 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 
05:52:11.681569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.681983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.681997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.682010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.682024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.682037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.682051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.682064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.682078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.271 [2024-11-20 05:52:11.682091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.682105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.271 [2024-11-20 05:52:11.682118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.271 [2024-11-20 05:52:11.682132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:12.272 [2024-11-20 05:52:11.682341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 
05:52:11.682705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:70576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.682982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.682995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:12.272 [2024-11-20 05:52:11.683238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:12.272 [2024-11-20 05:52:11.683270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
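The (03/02) and (00/08) pairs printed by spdk_nvme_print_completion above are the NVMe Status Code Type and Status Code in hex: SCT 0x3 (path related) / SC 0x02 is ASYMMETRIC ACCESS INACCESSIBLE, returned while the ANA state of this path makes the namespace unreachable, and SCT 0x0 (generic) / SC 0x08 is ABORTED - SQ DELETION, returned for I/O still queued when the submission queue is torn down. A minimal sketch of how the pair is carved out of completion dword 3, assuming the field layout of the NVMe base specification (illustrative shell, not an SPDK helper):

  # Assumed CQE DW3 layout: bits 24:17 = SC, bits 27:25 = SCT, bit 31 = DNR.
  decode_status() {
    local dw3=$1
    printf '(%02x/%02x) dnr:%d\n' \
      $(( (dw3 >> 25) & 0x7 )) $(( (dw3 >> 17) & 0xff )) $(( (dw3 >> 31) & 0x1 ))
  }
  decode_status $(( (0x3 << 25) | (0x02 << 17) ))   # -> (03/02), ASYMMETRIC ACCESS INACCESSIBLE
  decode_status $(( (0x0 << 25) | (0x08 << 17) ))   # -> (00/08), ABORTED - SQ DELETION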
00:41:12.272 [2024-11-20 05:52:11.683270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:41:12.273 [2024-11-20 05:52:11.683286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70704 len:8 PRP1 0x0 PRP2 0x0
00:41:12.273 [2024-11-20 05:52:11.683298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:41:12.273 [2024-11-20 05:52:11.683325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:41:12.273 [2024-11-20 05:52:11.683335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:41:12.273 [2024-11-20 05:52:11.683346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70712 len:8 PRP1 0x0 PRP2 0x0
00:41:12.273 [2024-11-20 05:52:11.683359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:41:12.273 [2024-11-20 05:52:11.684500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:41:12.273 [2024-11-20 05:52:11.684571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba2190 (9): Bad file descriptor
00:41:12.273 [2024-11-20 05:52:11.684683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:12.273 [2024-11-20 05:52:11.684706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba2190 with addr=10.0.0.3, port=4421
00:41:12.273 [2024-11-20 05:52:11.684720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba2190 is same with the state(6) to be set
00:41:12.273 [2024-11-20 05:52:11.684742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba2190 (9): Bad file descriptor
00:41:12.273 [2024-11-20 05:52:11.684765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:41:12.273 [2024-11-20 05:52:11.684779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:41:12.273 [2024-11-20 05:52:11.684793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:41:12.273 [2024-11-20 05:52:11.684810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:41:12.273 [2024-11-20 05:52:11.684824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:41:12.273 8253.40 IOPS, 32.24 MiB/s [2024-11-20T05:52:32.192Z]
8309.44 IOPS, 32.46 MiB/s [2024-11-20T05:52:32.192Z]
8355.43 IOPS, 32.64 MiB/s [2024-11-20T05:52:32.192Z]
8399.08 IOPS, 32.81 MiB/s [2024-11-20T05:52:32.192Z]
8446.49 IOPS, 32.99 MiB/s [2024-11-20T05:52:32.192Z]
8491.50 IOPS, 33.17 MiB/s [2024-11-20T05:52:32.192Z]
8537.88 IOPS, 33.35 MiB/s [2024-11-20T05:52:32.192Z]
8576.36 IOPS, 33.50 MiB/s [2024-11-20T05:52:32.192Z]
8611.95 IOPS, 33.64 MiB/s [2024-11-20T05:52:32.192Z]
8647.09 IOPS, 33.78 MiB/s [2024-11-20T05:52:32.192Z]
[2024-11-20 05:52:21.767172] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
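The raw errno = 111 logged by posix_sock_create above is ECONNREFUSED on Linux: the portal at 10.0.0.3:4421 refuses connections at this point (presumably because the multipath test is holding that path down), so the first reconnect attempt fails and the controller keeps resetting until the listener returns at 05:52:21. A quick way to decode such raw errno values (illustrative one-liner, not part of the test scripts):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused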
00:41:12.273 8681.09 IOPS, 33.91 MiB/s [2024-11-20T05:52:32.192Z]
8719.63 IOPS, 34.06 MiB/s [2024-11-20T05:52:32.192Z]
8759.30 IOPS, 34.22 MiB/s [2024-11-20T05:52:32.192Z]
8793.00 IOPS, 34.35 MiB/s [2024-11-20T05:52:32.192Z]
8823.02 IOPS, 34.46 MiB/s [2024-11-20T05:52:32.192Z]
8850.38 IOPS, 34.57 MiB/s [2024-11-20T05:52:32.192Z]
8877.16 IOPS, 34.68 MiB/s [2024-11-20T05:52:32.192Z]
8903.48 IOPS, 34.78 MiB/s [2024-11-20T05:52:32.192Z]
8930.91 IOPS, 34.89 MiB/s [2024-11-20T05:52:32.192Z]
8954.89 IOPS, 34.98 MiB/s [2024-11-20T05:52:32.192Z]
8978.82 IOPS, 35.07 MiB/s [2024-11-20T05:52:32.192Z]
Received shutdown signal, test time was about 55.078402 seconds
00:41:12.273
00:41:12.273 Latency(us)
00:41:12.273 [2024-11-20T05:52:32.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:12.273 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:41:12.273 Verification LBA range: start 0x0 length 0x4000
00:41:12.273 Nvme0n1 : 55.08 8979.21 35.08 0.00 0.00 14236.16 963.54 7030452.42
00:41:12.273 [2024-11-20T05:52:32.192Z] ===================================================================================================================
00:41:12.273 [2024-11-20T05:52:32.192Z] Total : 8979.21 35.08 0.00 0.00 14236.16 963.54 7030452.42
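The MiB/s column in the progress samples and in the Device Information table above follows directly from the IOPS figure and the 4096-byte I/O size declared in the Job line. A quick consistency check (illustrative, not part of the test scripts):

  awk 'BEGIN { printf "%.2f MiB/s\n", 8979.21 * 4096 / (1024 * 1024) }'
  # prints 35.07, matching the table's 35.08 MiB/s up to rounding of the averaged IOPS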
00:41:12.273 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:41:12.532 rmmod nvme_tcp
00:41:12.532 rmmod nvme_fabrics
00:41:12.532 rmmod nvme_keyring
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 96014 ']'
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 96014
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 96014 ']'
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 96014
00:41:12.532 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname
00:41:12.792 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:41:12.792 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96014
00:41:12.792 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:41:12.792 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:41:12.792 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96014'
00:41:12.792 killing process with pid 96014
00:41:12.792 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 96014
00:41:13.051 05:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 96014
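killprocess, traced above, first probes the target pid with kill -0, which delivers no signal but reports via its exit status whether the process still exists and can be signalled, then sends the default SIGTERM and waits for the reactor to exit. A minimal sketch of that probe-then-kill pattern (illustrative; the real helper lives in common/autotest_common.sh):

  pid=96014                 # pid of the nvmf target process, as in the trace above
  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"             # default SIGTERM: ask the process to exit
    wait "$pid" || true     # reap it (works when it is a child of this shell)
  else
    echo "process $pid already gone"
  fi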
_remove_spdk_ns 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:41:13.310 00:41:13.310 real 1m0.875s 00:41:13.310 user 2m47.015s 00:41:13.310 sys 0m17.624s 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:13.310 ************************************ 00:41:13.310 END TEST nvmf_host_multipath 00:41:13.310 ************************************ 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:13.310 ************************************ 00:41:13.310 START TEST nvmf_timeout 00:41:13.310 ************************************ 00:41:13.310 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:41:13.570 * Looking for test storage... 00:41:13.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:41:13.570 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:13.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.571 --rc genhtml_branch_coverage=1 00:41:13.571 --rc genhtml_function_coverage=1 00:41:13.571 --rc genhtml_legend=1 00:41:13.571 --rc geninfo_all_blocks=1 00:41:13.571 --rc geninfo_unexecuted_blocks=1 00:41:13.571 00:41:13.571 ' 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:13.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.571 --rc genhtml_branch_coverage=1 00:41:13.571 --rc genhtml_function_coverage=1 00:41:13.571 --rc genhtml_legend=1 00:41:13.571 --rc geninfo_all_blocks=1 00:41:13.571 --rc geninfo_unexecuted_blocks=1 00:41:13.571 00:41:13.571 ' 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:13.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.571 --rc genhtml_branch_coverage=1 00:41:13.571 --rc genhtml_function_coverage=1 00:41:13.571 --rc genhtml_legend=1 00:41:13.571 --rc geninfo_all_blocks=1 00:41:13.571 --rc geninfo_unexecuted_blocks=1 00:41:13.571 00:41:13.571 ' 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:13.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.571 --rc genhtml_branch_coverage=1 00:41:13.571 --rc genhtml_function_coverage=1 00:41:13.571 --rc genhtml_legend=1 00:41:13.571 --rc geninfo_all_blocks=1 00:41:13.571 --rc geninfo_unexecuted_blocks=1 00:41:13.571 00:41:13.571 ' 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.571 05:52:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:13.571 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:13.571 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
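The `[: : integer expression expected` complaint above is a harness quirk rather than a test failure: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` with a variable that expanded empty, `[` prints the error and returns nonzero, so the condition is simply treated as false and the script continues. A minimal sketch of the defensive form (SPDK_TEST_SOME_FLAG is a hypothetical stand-in; the log does not show which variable was empty):

    # Sketch only: an integer test that tolerates an unset or empty flag.
    # SPDK_TEST_SOME_FLAG is a placeholder, not the actual variable
    # tested at nvmf/common.sh line 33.
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi
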
00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:41:13.572 Cannot find device "nvmf_init_br" 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:41:13.572 Cannot find device "nvmf_init_br2" 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:41:13.572 Cannot find device "nvmf_tgt_br" 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:41:13.572 Cannot find device "nvmf_tgt_br2" 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:41:13.572 Cannot find device "nvmf_init_br" 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:41:13.572 Cannot find device "nvmf_init_br2" 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:41:13.572 Cannot find device "nvmf_tgt_br" 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:41:13.572 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:41:13.831 Cannot find device "nvmf_tgt_br2" 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:41:13.831 Cannot find device "nvmf_br" 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:41:13.831 Cannot find device "nvmf_init_if" 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:41:13.831 Cannot find device "nvmf_init_if2" 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:13.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:13.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:13.831 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:41:13.832 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:14.091 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:41:14.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:14.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:41:14.092 00:41:14.092 --- 10.0.0.3 ping statistics --- 00:41:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.092 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:41:14.092 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:41:14.092 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:41:14.092 00:41:14.092 --- 10.0.0.4 ping statistics --- 00:41:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.092 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:14.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:14.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:41:14.092 00:41:14.092 --- 10.0.0.1 ping statistics --- 00:41:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.092 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:41:14.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:14.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:41:14.092 00:41:14.092 --- 10.0.0.2 ping statistics --- 00:41:14.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:14.092 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=97428 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 97428 00:41:14.092 05:52:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97428 ']' 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:14.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:14.092 05:52:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:41:14.092 [2024-11-20 05:52:33.966984] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:41:14.092 [2024-11-20 05:52:33.967084] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:14.351 [2024-11-20 05:52:34.118132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:14.351 [2024-11-20 05:52:34.192880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:14.351 [2024-11-20 05:52:34.192936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:14.351 [2024-11-20 05:52:34.192946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:14.351 [2024-11-20 05:52:34.192954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:14.351 [2024-11-20 05:52:34.192961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
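For reference, the nvmf_veth_init sequence traced above reduces to a short list of iproute2 commands. A condensed sketch using the same device names and addresses as the log (only the first initiator/target pair is shown; the harness builds the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, the same way):

    ip netns add nvmf_tgt_ns_spdk                    # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk   # move the target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br          # bridge the two host-side veth ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                               # initiator-to-target sanity check

The earlier run of "Cannot find device" messages is the idempotent cleanup pass deleting these same devices (each guarded by `true`) before recreating them.
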
00:41:14.351 [2024-11-20 05:52:34.194361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:14.351 [2024-11-20 05:52:34.194373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:15.288 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:15.546 [2024-11-20 05:52:35.298775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:15.546 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:41:15.804 Malloc0 00:41:15.804 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:16.063 05:52:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:16.322 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:16.582 [2024-11-20 05:52:36.261811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97519 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97519 /var/tmp/bdevperf.sock 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97519 ']' 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:16.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
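Collected from the trace above, the whole target-side provisioning is five RPC calls (each appears verbatim in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, -u: 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf is then launched against its own RPC socket (-r /var/tmp/bdevperf.sock) so the host side can be configured independently of the target.
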
00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:16.582 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:41:16.582 [2024-11-20 05:52:36.318022] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:41:16.582 [2024-11-20 05:52:36.318097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97519 ] 00:41:16.582 [2024-11-20 05:52:36.462596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:16.841 [2024-11-20 05:52:36.517750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:16.841 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:16.841 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:41:16.841 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:41:17.100 05:52:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:41:17.359 NVMe0n1 00:41:17.359 05:52:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97552 00:41:17.359 05:52:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:17.359 05:52:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:41:17.618 Running I/O for 10 seconds... 
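The two bdevperf-side RPCs above are the crux of the timeout test: unlimited bdev-layer retries combined with a short controller-loss window. Collected, with the flag semantics annotated (the annotations are interpretation, not log output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # -r (bdev retry count) = -1: retry failed I/O indefinitely at the bdev layer
    $rpc -s $sock bdev_nvme_set_options -r -1
    # --reconnect-delay-sec 2: wait 2 s between reconnect attempts
    # --ctrlr-loss-timeout-sec 5: declare the controller lost after 5 s without a connection
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
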
00:41:18.555 05:52:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:41:18.818 10379.00 IOPS, 40.54 MiB/s [2024-11-20T05:52:38.737Z]
00:41:18.818 [2024-11-20 05:52:38.497262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455510 is same with the state(6) to be set
(previous message repeated for every recv-state poll from [2024-11-20 05:52:38.497310] through [2024-11-20 05:52:38.498264])
00:41:18.820 [2024-11-20 05:52:38.500102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 05:52:38.500135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(command/completion pair repeated for the remaining in-flight READs, lba 93736 through lba 93920, len:8 each, all completed ABORTED - SQ DELETION (00/08))
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.820 [2024-11-20 05:52:38.500624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:41:18.821 [2024-11-20 05:52:38.500818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.500988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.500997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.821 [2024-11-20 05:52:38.501288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.821 [2024-11-20 05:52:38.501297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.822 [2024-11-20 05:52:38.501316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.822 [2024-11-20 05:52:38.501334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:18.822 [2024-11-20 05:52:38.501354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94328 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 
05:52:38.501774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.822 [2024-11-20 05:52:38.501934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.822 [2024-11-20 05:52:38.501944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.501952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.501962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.501976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.501986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.501995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.823 [2024-11-20 05:52:38.502472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.823 [2024-11-20 05:52:38.502481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.824 [2024-11-20 05:52:38.502500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.824 [2024-11-20 05:52:38.502519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.824 [2024-11-20 05:52:38.502538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.824 [2024-11-20 05:52:38.502558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:18.824 [2024-11-20 05:52:38.502577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:41:18.824 [2024-11-20 05:52:38.502612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:41:18.824 [2024-11-20 05:52:38.502620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94744 len:8 PRP1 0x0 PRP2 0x0 00:41:18.824 [2024-11-20 05:52:38.502629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:18.824 [2024-11-20 05:52:38.502753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:18.824 [2024-11-20 05:52:38.502772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:18.824 [2024-11-20 05:52:38.502790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:18.824 [2024-11-20 05:52:38.502808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:18.824 [2024-11-20 05:52:38.502817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfddf50 is same with the state(6) to be set 00:41:18.824 [2024-11-20 05:52:38.503003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:18.824 [2024-11-20 05:52:38.503019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfddf50 (9): Bad file descriptor 00:41:18.824 [2024-11-20 05:52:38.503095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:18.824 [2024-11-20 05:52:38.503110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfddf50 with addr=10.0.0.3, port=4420 00:41:18.824 [2024-11-20 05:52:38.503119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfddf50 is same with the state(6) to be set 00:41:18.824 [2024-11-20 05:52:38.503132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfddf50 (9): Bad file descriptor 00:41:18.824 [2024-11-20 05:52:38.503146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:41:18.824 [2024-11-20 05:52:38.503154] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:41:18.824 [2024-11-20 05:52:38.503165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:41:18.824 [2024-11-20 05:52:38.503175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:41:18.824 [2024-11-20 05:52:38.503184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:18.824 05:52:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:41:20.700 5858.00 IOPS, 22.88 MiB/s [2024-11-20T05:52:40.619Z] 3905.33 IOPS, 15.26 MiB/s [2024-11-20T05:52:40.619Z] [2024-11-20 05:52:40.503443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:20.700 [2024-11-20 05:52:40.503489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfddf50 with addr=10.0.0.3, port=4420 00:41:20.700 [2024-11-20 05:52:40.503502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfddf50 is same with the state(6) to be set 00:41:20.700 [2024-11-20 05:52:40.503523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfddf50 (9): Bad file descriptor 00:41:20.700 [2024-11-20 05:52:40.503540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:41:20.700 [2024-11-20 05:52:40.503550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:41:20.700 [2024-11-20 05:52:40.503562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:41:20.700 [2024-11-20 05:52:40.503573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
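The connect() failed, errno = 111 retries above arrive roughly once per second and the reset is abandoned a few seconds later; that cadence is fixed when the controller is attached, not by the timeout test itself. A minimal sketch of the attach call, assuming the same reconnect flags that the follow-up test later in this log passes explicitly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Hedged re-issue of the attach; the three timeout flags are copied from
    # the bdev_nvme_attach_controller call recorded further down in this log.
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 2 \
        --ctrlr-loss-timeout-sec 5
    # --reconnect-delay-sec 1:      retry the TCP connect about once per second
    # --fast-io-fail-timeout-sec 2: queued I/O starts failing after ~2 s down
    # --ctrlr-loss-timeout-sec 5:   declare the controller lost after ~5 s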
00:41:20.700 [2024-11-20 05:52:40.503584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:20.700 05:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:41:20.700 05:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:41:20.700 05:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:41:20.959 05:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:41:20.959 05:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:41:20.959 05:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:41:20.959 05:52:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:41:21.218 05:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:41:21.218 05:52:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:41:22.859 2929.00 IOPS, 11.44 MiB/s [2024-11-20T05:52:42.778Z] 2343.20 IOPS, 9.15 MiB/s [2024-11-20T05:52:42.778Z] [2024-11-20 05:52:42.503830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.859 [2024-11-20 05:52:42.503869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfddf50 with addr=10.0.0.3, port=4420 00:41:22.859 [2024-11-20 05:52:42.503882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfddf50 is same with the state(6) to be set 00:41:22.859 [2024-11-20 05:52:42.503902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfddf50 (9): Bad file descriptor 00:41:22.859 [2024-11-20 05:52:42.503926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:41:22.859 [2024-11-20 05:52:42.503936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:41:22.859 [2024-11-20 05:52:42.503947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:41:22.859 [2024-11-20 05:52:42.503957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:41:22.859 [2024-11-20 05:52:42.503967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:24.732 1952.67 IOPS, 7.63 MiB/s [2024-11-20T05:52:44.651Z] 1673.71 IOPS, 6.54 MiB/s [2024-11-20T05:52:44.651Z] [2024-11-20 05:52:44.504108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:41:24.732 [2024-11-20 05:52:44.504142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:41:24.732 [2024-11-20 05:52:44.504153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:41:24.732 [2024-11-20 05:52:44.504162] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:41:24.732 [2024-11-20 05:52:44.504174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
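By 05:52:44 the ctrlr-loss timeout has expired ("already in failed state"), so the controller is gone for good and the presence checks below flip from NVMe0/NVMe0n1 to empty strings. A sketch of the two helpers traced above, reconstructed from the rpc.py and jq invocations in this log (the commands are verbatim from the trace; the function bodies themselves are assumed):

    # host/timeout.sh helpers as suggested by the trace above.
    get_controller() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_controllers | jq -r '.[].name'
    }
    get_bdev() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_bdevs | jq -r '.[].name'
    }
    # While attached: get_controller prints NVMe0 and get_bdev prints NVMe0n1.
    # After the controller is lost both print nothing, which is what the
    # later [[ '' == '' ]] comparisons in this log verify.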
00:41:25.668 1464.50 IOPS, 5.72 MiB/s 00:41:25.669 Latency(us) 00:41:25.669 [2024-11-20T05:52:45.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:25.669 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:41:25.669 Verification LBA range: start 0x0 length 0x4000 00:41:25.669 NVMe0n1 : 8.14 1440.11 5.63 15.73 0.00 87932.56 1833.45 7030452.42 00:41:25.669 [2024-11-20T05:52:45.588Z] =================================================================================================================== 00:41:25.669 [2024-11-20T05:52:45.588Z] Total : 1440.11 5.63 15.73 0.00 87932.56 1833.45 7030452.42 00:41:25.669 { 00:41:25.669 "results": [ 00:41:25.669 { 00:41:25.669 "job": "NVMe0n1", 00:41:25.669 "core_mask": "0x4", 00:41:25.669 "workload": "verify", 00:41:25.669 "status": "finished", 00:41:25.669 "verify_range": { 00:41:25.669 "start": 0, 00:41:25.669 "length": 16384 00:41:25.669 }, 00:41:25.669 "queue_depth": 128, 00:41:25.669 "io_size": 4096, 00:41:25.669 "runtime": 8.13548, 00:41:25.669 "iops": 1440.1117082212727, 00:41:25.669 "mibps": 5.6254363602393465, 00:41:25.669 "io_failed": 128, 00:41:25.669 "io_timeout": 0, 00:41:25.669 "avg_latency_us": 87932.55891799746, 00:41:25.669 "min_latency_us": 1833.4476190476191, 00:41:25.669 "max_latency_us": 7030452.419047619 00:41:25.669 } 00:41:25.669 ], 00:41:25.669 "core_count": 1 00:41:25.669 } 00:41:26.236 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:41:26.236 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:41:26.236 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:41:26.495 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:41:26.495 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:41:26.495 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:41:26.495 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97552 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97519 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97519 ']' 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97519 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:26.754 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97519 00:41:27.012 killing process with pid 97519 00:41:27.012 Received shutdown signal, test time was about 9.330468 seconds 00:41:27.012 00:41:27.012 Latency(us) 00:41:27.012 [2024-11-20T05:52:46.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:27.012 [2024-11-20T05:52:46.931Z] =================================================================================================================== 00:41:27.012 [2024-11-20T05:52:46.931Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:41:27.012 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:41:27.012 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:41:27.012 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97519' 00:41:27.012 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97519 00:41:27.012 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97519 00:41:27.012 05:52:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:27.271 [2024-11-20 05:52:47.044099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97712 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97712 /var/tmp/bdevperf.sock 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97712 ']' 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:27.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:27.271 05:52:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:41:27.271 [2024-11-20 05:52:47.126358] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:41:27.271 [2024-11-20 05:52:47.126485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97712 ] 00:41:27.530 [2024-11-20 05:52:47.269290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.530 [2024-11-20 05:52:47.311241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:28.467 05:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:28.467 05:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:41:28.467 05:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:41:28.467 05:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:41:28.726 NVMe0n1 00:41:28.985 05:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:28.985 05:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97754 00:41:28.985 05:52:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:41:28.985 Running I/O for 10 seconds... 00:41:29.919 05:52:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:30.180 10480.00 IOPS, 40.94 MiB/s [2024-11-20T05:52:50.099Z] [2024-11-20 05:52:49.845038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:30.180 [2024-11-20 05:52:49.845098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.180 [2024-11-20 05:52:49.845115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:30.180 [2024-11-20 05:52:49.845125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.180 [2024-11-20 05:52:49.845137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:30.180 [2024-11-20 05:52:49.845146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.180 [2024-11-20 05:52:49.845157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:30.180 [2024-11-20 05:52:49.845166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.180 [2024-11-20 05:52:49.845176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:30.180 [2024-11-20 
05:52:49.845185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~120 similar nvme_io_qpair_print_command / spdk_nvme_print_completion record pairs elided: queued WRITE (lba 94072-94472) and READ (lba 93456-94008) commands on sqid:1, each completed ABORTED - SQ DELETION (00/08) while the I/O qpair was torn down after the listener removal ...]
00:41:30.183 [2024-11-20 05:52:49.851924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:30.183 [2024-11-20 05:52:49.851936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.183 [2024-11-20 05:52:49.851946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12637b0 is same with the state(6) to be set 00:41:30.183 [2024-11-20 05:52:49.852061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:41:30.183 [2024-11-20 05:52:49.852070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:41:30.183 [2024-11-20 05:52:49.852078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94024 len:8 PRP1 0x0 PRP2 0x0 00:41:30.183 [2024-11-20 05:52:49.852087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.183 [2024-11-20 05:52:49.852415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:30.183 [2024-11-20 05:52:49.852438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.183 [2024-11-20 05:52:49.852459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:30.183 [2024-11-20 05:52:49.852468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.183 [2024-11-20 05:52:49.852478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:30.183 [2024-11-20 05:52:49.852486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.183 [2024-11-20 05:52:49.852496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:30.183 [2024-11-20 05:52:49.852505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:30.184 [2024-11-20 05:52:49.852513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:30.184 [2024-11-20 05:52:49.852977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:41:30.184 [2024-11-20 05:52:49.853002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 (9): Bad file descriptor 00:41:30.184 [2024-11-20 05:52:49.853176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:30.184 [2024-11-20 05:52:49.853200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7f50 with addr=10.0.0.3, port=4420 00:41:30.184 [2024-11-20 05:52:49.853211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:30.184 [2024-11-20 05:52:49.853295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 
(9): Bad file descriptor 00:41:30.184 [2024-11-20 05:52:49.853310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:41:30.184 [2024-11-20 05:52:49.853320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:41:30.184 [2024-11-20 05:52:49.853330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:41:30.184 [2024-11-20 05:52:49.853339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:41:30.184 [2024-11-20 05:52:49.853349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:41:30.184 05:52:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:41:31.121 5841.00 IOPS, 22.82 MiB/s [2024-11-20T05:52:51.040Z] [2024-11-20 05:52:50.853682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:31.121 [2024-11-20 05:52:50.853723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7f50 with addr=10.0.0.3, port=4420 00:41:31.121 [2024-11-20 05:52:50.853751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:31.121 [2024-11-20 05:52:50.853769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 (9): Bad file descriptor 00:41:31.121 [2024-11-20 05:52:50.853784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:41:31.121 [2024-11-20 05:52:50.853794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:41:31.121 [2024-11-20 05:52:50.853804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:41:31.121 [2024-11-20 05:52:50.853814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:41:31.121 [2024-11-20 05:52:50.853824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:41:31.121 05:52:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:31.380 [2024-11-20 05:52:51.118808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:31.380 05:52:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97754 00:41:32.208 3894.00 IOPS, 15.21 MiB/s [2024-11-20T05:52:52.127Z] [2024-11-20 05:52:51.871099] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
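The failed-then-successful reset sequence above is exactly what this test probes: with the listener removed, queued I/O is aborted, reconnect attempts run roughly once per second and fail with ECONNREFUSED (errno 111), and the controller recovers once the listener returns, inside the 5 s controller-loss window. The knobs come from the attach call logged earlier; a sketch of the relevant RPCs, with paths and address copied from this log and flag comments reflecting my reading of them rather than authoritative docs:

# retry failed I/O without limit at the nvme bdev layer (bdev retry count = -1)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
# reconnect every 1 s, fail pending I/O after 2 s, give up on the controller after 5 s
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
  -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
# the target side toggles availability by removing and re-adding the listener
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420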
00:41:34.084 2920.50 IOPS, 11.41 MiB/s [2024-11-20T05:52:54.941Z] 4304.60 IOPS, 16.81 MiB/s [2024-11-20T05:52:55.878Z] 5433.00 IOPS, 21.22 MiB/s [2024-11-20T05:52:56.817Z] 6253.86 IOPS, 24.43 MiB/s [2024-11-20T05:52:57.756Z] 6875.75 IOPS, 26.86 MiB/s [2024-11-20T05:52:59.137Z] 7338.78 IOPS, 28.67 MiB/s [2024-11-20T05:52:59.137Z] 7716.80 IOPS, 30.14 MiB/s 00:41:39.218 Latency(us) 00:41:39.218 [2024-11-20T05:52:59.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:39.218 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:41:39.218 Verification LBA range: start 0x0 length 0x4000 00:41:39.218 NVMe0n1 : 10.01 7721.58 30.16 0.00 0.00 16543.27 1521.37 3019898.88 00:41:39.218 [2024-11-20T05:52:59.137Z] =================================================================================================================== 00:41:39.218 [2024-11-20T05:52:59.137Z] Total : 7721.58 30.16 0.00 0.00 16543.27 1521.37 3019898.88 00:41:39.218 { 00:41:39.218 "results": [ 00:41:39.218 { 00:41:39.218 "job": "NVMe0n1", 00:41:39.218 "core_mask": "0x4", 00:41:39.218 "workload": "verify", 00:41:39.218 "status": "finished", 00:41:39.218 "verify_range": { 00:41:39.218 "start": 0, 00:41:39.218 "length": 16384 00:41:39.218 }, 00:41:39.218 "queue_depth": 128, 00:41:39.218 "io_size": 4096, 00:41:39.218 "runtime": 10.006372, 00:41:39.218 "iops": 7721.579809345485, 00:41:39.218 "mibps": 30.1624211302558, 00:41:39.218 "io_failed": 0, 00:41:39.218 "io_timeout": 0, 00:41:39.218 "avg_latency_us": 16543.272505545232, 00:41:39.218 "min_latency_us": 1521.3714285714286, 00:41:39.218 "max_latency_us": 3019898.88 00:41:39.218 } 00:41:39.218 ], 00:41:39.218 "core_count": 1 00:41:39.218 } 00:41:39.218 05:52:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97875 00:41:39.218 05:52:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:39.218 05:52:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:41:39.218 Running I/O for 10 seconds... 
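Alongside the human-readable Latency table, each run emits the structured JSON block seen above. If that block is saved to a file (results.json here is hypothetical), the headline numbers can be pulled out directly, for example:

# per-job IOPS, average latency, and failed-I/O count from a saved bdevperf JSON dump
jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us, failed \(.io_failed)"' results.json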
00:41:40.157 05:52:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:40.157 10508.00 IOPS, 41.05 MiB/s [2024-11-20T05:53:00.076Z] [2024-11-20 05:52:59.996049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14abd40 is same with the state(6) to be set
[... 7 more identical recv-state records elided ...]
[2024-11-20 05:52:59.996718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-11-20 05:52:59.996752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~25 further print_command/print_completion pairs elided: queued WRITE (lba 93528-93648) and READ (lba 93208-93272) commands on sqid:1, all completed ABORTED - SQ DELETION (00/08) after the second listener removal; the captured log breaks off mid-record here ...]
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.998979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.998989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:41:40.158 [2024-11-20 05:52:59.999123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:52:59.999848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:52:59.999861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.158 [2024-11-20 05:53:00.000416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.158 [2024-11-20 05:53:00.000425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000435] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93976 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.000990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.000999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:41:40.159 [2024-11-20 05:53:00.001579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.001976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.001993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.002010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.002026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.002044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.002061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.002078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.002094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.002125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.159 [2024-11-20 05:53:00.002144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.159 [2024-11-20 05:53:00.002161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.160 [2024-11-20 05:53:00.002189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:41:40.160 [2024-11-20 05:53:00.002217] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.002245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.002273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.002301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.002333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.002363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.002398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.002417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.002433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.003151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.003264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.003284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.003310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.003327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.003344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.003363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.003764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.003779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.003801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.003828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.003839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.003851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:93408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:41:40.160 [2024-11-20 05:53:00.004490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.004959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.004969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.005097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.005119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:41:40.160 [2024-11-20 05:53:00.005258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005377] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:41:40.160 [2024-11-20 05:53:00.005390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:41:40.160 [2024-11-20 05:53:00.005398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93520 len:8 PRP1 0x0 PRP2 0x0 00:41:40.160 [2024-11-20 05:53:00.005407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:41:40.160 [2024-11-20 05:53:00.005793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:41:40.160 [2024-11-20 05:53:00.005813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:41:40.160 [2024-11-20 05:53:00.005831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:41:40.160 [2024-11-20 05:53:00.005851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:40.160 [2024-11-20 05:53:00.005861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:40.160 [2024-11-20 05:53:00.006303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:41:40.160 [2024-11-20 05:53:00.006328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 (9): Bad file descriptor 00:41:40.160 [2024-11-20 05:53:00.006411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:40.161 [2024-11-20 05:53:00.006427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7f50 with addr=10.0.0.3, port=4420 00:41:40.161 [2024-11-20 05:53:00.006659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:40.161 [2024-11-20 05:53:00.006678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 (9): Bad file descriptor 00:41:40.161 [2024-11-20 05:53:00.006692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:41:40.161 [2024-11-20 05:53:00.006776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:41:40.161 [2024-11-20 05:53:00.006787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:41:40.161 [2024-11-20 05:53:00.006797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:41:40.161 [2024-11-20 05:53:00.006808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:41:40.161 05:53:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:41:41.098 5825.00 IOPS, 22.75 MiB/s [2024-11-20T05:53:01.017Z] [2024-11-20 05:53:01.006915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:41.098 [2024-11-20 05:53:01.006973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7f50 with addr=10.0.0.3, port=4420 00:41:41.098 [2024-11-20 05:53:01.006987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:41.098 [2024-11-20 05:53:01.007004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 (9): Bad file descriptor 00:41:41.098 [2024-11-20 05:53:01.007020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:41:41.098 [2024-11-20 05:53:01.007029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:41:41.098 [2024-11-20 05:53:01.007040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:41:41.098 [2024-11-20 05:53:01.007050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:41:41.098 [2024-11-20 05:53:01.007061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:41:42.296 3883.33 IOPS, 15.17 MiB/s [2024-11-20T05:53:02.215Z] [2024-11-20 05:53:02.007160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:42.296 [2024-11-20 05:53:02.007217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7f50 with addr=10.0.0.3, port=4420 00:41:42.296 [2024-11-20 05:53:02.007230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:42.296 [2024-11-20 05:53:02.007248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 (9): Bad file descriptor 00:41:42.296 [2024-11-20 05:53:02.007263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:41:42.296 [2024-11-20 05:53:02.007272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:41:42.296 [2024-11-20 05:53:02.007283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:41:42.296 [2024-11-20 05:53:02.007299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:41:42.296 [2024-11-20 05:53:02.007310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:41:43.302 2912.50 IOPS, 11.38 MiB/s [2024-11-20T05:53:03.221Z] [2024-11-20 05:53:03.009439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:43.302 [2024-11-20 05:53:03.009506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f7f50 with addr=10.0.0.3, port=4420 00:41:43.302 [2024-11-20 05:53:03.009520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7f50 is same with the state(6) to be set 00:41:43.302 [2024-11-20 05:53:03.009703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f7f50 (9): Bad file descriptor 00:41:43.302 [2024-11-20 05:53:03.010307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:41:43.302 [2024-11-20 05:53:03.010332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:41:43.302 [2024-11-20 05:53:03.010343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:41:43.302 [2024-11-20 05:53:03.010354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:41:43.302 [2024-11-20 05:53:03.010365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:41:43.302 05:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:41:43.562 [2024-11-20 05:53:03.255658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:41:43.562 05:53:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97875 00:41:44.130 2330.00 IOPS, 9.10 MiB/s [2024-11-20T05:53:04.049Z] [2024-11-20 05:53:04.034258] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
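The failure/recovery cycle logged above is the fault this test injects on purpose: removing the TCP listener aborts the in-flight I/O (SQ DELETION) and makes every reconnect attempt fail with connect() errno 111, until the listener is re-added and the controller reset finally succeeds. A minimal sketch of driving the same window by hand, using the rpc.py commands visible in this log (paths, NQN, and address taken from the log; the 3-second gap mirrors the "sleep 3" at timeout.sh@101 above):

```python
# Sketch: open and close the reconnect window the way host/timeout.sh does.
import subprocess
import time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
LISTENER = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

def rpc(*args):
    subprocess.run([RPC, *args], check=True)

# Drop the listener: queued I/O completes ABORTED - SQ DELETION and each
# reconnect attempt fails with connect() errno 111, as logged above.
rpc("nvmf_subsystem_remove_listener", NQN, *LISTENER)
time.sleep(3)  # matches the "sleep 3" in timeout.sh@101 above
# Restore the listener: the next reset attempt reconnects and succeeds.
rpc("nvmf_subsystem_add_listener", NQN, *LISTENER)
```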
00:41:46.005 3542.50 IOPS, 13.84 MiB/s [2024-11-20T05:53:07.302Z] 4669.00 IOPS, 18.24 MiB/s [2024-11-20T05:53:08.239Z] 5519.38 IOPS, 21.56 MiB/s [2024-11-20T05:53:09.175Z] 6174.78 IOPS, 24.12 MiB/s [2024-11-20T05:53:09.175Z] 6701.20 IOPS, 26.18 MiB/s 00:41:49.256 Latency(us) 00:41:49.256 [2024-11-20T05:53:09.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:49.256 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:41:49.256 Verification LBA range: start 0x0 length 0x4000 00:41:49.256 NVMe0n1 : 10.01 6709.06 26.21 5294.30 0.00 10639.63 647.56 3019898.88 00:41:49.256 [2024-11-20T05:53:09.175Z] =================================================================================================================== 00:41:49.256 [2024-11-20T05:53:09.175Z] Total : 6709.06 26.21 5294.30 0.00 10639.63 0.00 3019898.88 00:41:49.256 { 00:41:49.256 "results": [ 00:41:49.256 { 00:41:49.256 "job": "NVMe0n1", 00:41:49.256 "core_mask": "0x4", 00:41:49.256 "workload": "verify", 00:41:49.256 "status": "finished", 00:41:49.256 "verify_range": { 00:41:49.256 "start": 0, 00:41:49.256 "length": 16384 00:41:49.256 }, 00:41:49.256 "queue_depth": 128, 00:41:49.256 "io_size": 4096, 00:41:49.256 "runtime": 10.007357, 00:41:49.256 "iops": 6709.06414151109, 00:41:49.256 "mibps": 26.207281802777697, 00:41:49.256 "io_failed": 52982, 00:41:49.256 "io_timeout": 0, 00:41:49.256 "avg_latency_us": 10639.626042666147, 00:41:49.256 "min_latency_us": 647.5580952380952, 00:41:49.256 "max_latency_us": 3019898.88 00:41:49.256 } 00:41:49.256 ], 00:41:49.256 "core_count": 1 00:41:49.256 } 00:41:49.256 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97712 00:41:49.256 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97712 ']' 00:41:49.256 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97712 00:41:49.256 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:41:49.257 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:49.257 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97712 00:41:49.257 killing process with pid 97712 00:41:49.257 Received shutdown signal, test time was about 10.000000 seconds 00:41:49.257 00:41:49.257 Latency(us) 00:41:49.257 [2024-11-20T05:53:09.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:49.257 [2024-11-20T05:53:09.176Z] =================================================================================================================== 00:41:49.257 [2024-11-20T05:53:09.176Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:49.257 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:41:49.257 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:41:49.257 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97712' 00:41:49.257 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97712 00:41:49.257 05:53:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97712 00:41:49.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
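Both result blocks in this log come from the bdevperf.py "perform_tests" RPC invoked above. A small sketch of capturing such a run programmatically and reading the per-job stats back out (helper path and socket are taken from this log; that the RPC result is the JSON object printed to stdout is an assumption about the tool's output):

```python
# Sketch: re-issue the perform_tests RPC used in this log and pull the
# per-job stats out of the JSON document printed on completion.
import json
import subprocess

BDEVPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
SOCK = "/var/tmp/bdevperf.sock"

out = subprocess.run(
    [BDEVPERF_PY, "-s", SOCK, "perform_tests"],
    capture_output=True, text=True, check=True,
).stdout

# Tolerate any text around the JSON object by decoding from the first brace.
result, _ = json.JSONDecoder().raw_decode(out[out.index("{"):])
for job in result["results"]:
    print(job["job"], job["iops"], job["io_failed"], job["avg_latency_us"])
```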
00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97998 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97998 /var/tmp/bdevperf.sock 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97998 ']' 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:49.257 05:53:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:41:49.516 [2024-11-20 05:53:09.191117] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:41:49.516 [2024-11-20 05:53:09.191489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97998 ] 00:41:49.516 [2024-11-20 05:53:09.342839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.516 [2024-11-20 05:53:09.390607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:50.453 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:50.453 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:41:50.453 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:41:50.453 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98026 00:41:50.453 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:41:50.453 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:41:51.021 NVMe0n1 00:41:51.021 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:51.021 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98084 00:41:51.021 05:53:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:41:51.021 Running I/O for 10 seconds... 
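The attach call above is what arms the timeout behavior under test: --ctrlr-loss-timeout-sec 5 bounds how long bdev_nvme keeps retrying a lost controller, and --reconnect-delay-sec 2 spaces the attempts. A sketch of the equivalent raw JSON-RPC request over the bdevperf socket follows; the snake_case parameter names are assumptions inferred from the rpc.py flags logged above, not verified against the RPC schema:

```python
# Sketch: the bdev_nvme_attach_controller call above, sent as a raw JSON-RPC
# request to the UNIX socket bdevperf listens on. Parameter names are assumed
# from the rpc.py flags shown in this log.
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "NVMe0",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "trsvcid": "4420",
        "adrfam": "ipv4",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "ctrlr_loss_timeout_sec": 5,  # give up on the controller after 5 s
        "reconnect_delay_sec": 2,     # retry the TCP connection every 2 s
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(1 << 16).decode())  # raw JSON-RPC response
```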
00:41:51.970 05:53:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:41:52.232 20865.00 IOPS, 81.50 MiB/s [2024-11-20T05:53:12.151Z] [2024-11-20 05:53:12.004802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ae8f0 is same with the state(6) to be set
00:41:52.232 [... the same recv-state message repeats roughly 30 more times for tqpair=0x14ae8f0, 05:53:12.004868 through 05:53:12.005104 ...]
00:41:52.233 [2024-11-20 05:53:12.005565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:41:52.233 [2024-11-20 05:53:12.005602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:41:52.233 [... the same READ/ABORTED - SQ DELETION pair repeats for roughly 125 further in-flight commands with distinct cid/lba values, 05:53:12.005623 through 05:53:12.015869 ...]
00:41:52.236 [2024-11-20 05:53:12.016250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17507b0 is same with the state(6) to be set
00:41:52.236 [2024-11-20 05:53:12.016263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:41:52.236 [2024-11-20 05:53:12.016271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:41:52.236 [2024-11-20 05:53:12.016279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44368 len:8 PRP1 0x0 PRP2 0x0
00:41:52.236 [2024-11-20 05:53:12.016288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:41:52.236 [2024-11-20 05:53:12.016685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:41:52.236 [2024-11-20 05:53:12.016709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:41:52.236 [... the same ASYNC EVENT REQUEST/ABORTED pair repeats for qid:0 cid:1 through cid:3 ...]
00:41:52.236 [2024-11-20 05:53:12.016776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e4f50 is same with the state(6) to be set
00:41:52.236 [2024-11-20 05:53:12.017159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:41:52.236 [2024-11-20 05:53:12.017186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4f50 (9): Bad file descriptor
00:41:52.236 [2024-11-20 05:53:12.017372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:52.236 [2024-11-20 05:53:12.017477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e4f50 with addr=10.0.0.3, port=4420
00:41:52.236 [2024-11-20 05:53:12.017493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e4f50 is same with the state(6) to be set
00:41:52.236 [2024-11-20 05:53:12.017510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4f50 (9): Bad file descriptor [2024-11-20
05:53:12.017611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:41:52.236 [2024-11-20 05:53:12.017623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:41:52.236 [2024-11-20 05:53:12.017634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:41:52.236 [2024-11-20 05:53:12.017643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:41:52.236 [2024-11-20 05:53:12.017653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:41:52.236 05:53:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98084 00:41:54.107 12385.50 IOPS, 48.38 MiB/s [2024-11-20T05:53:14.026Z] 8257.00 IOPS, 32.25 MiB/s [2024-11-20T05:53:14.026Z] [2024-11-20 05:53:14.017987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:54.107 [2024-11-20 05:53:14.018022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e4f50 with addr=10.0.0.3, port=4420 00:41:54.107 [2024-11-20 05:53:14.018034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e4f50 is same with the state(6) to be set 00:41:54.107 [2024-11-20 05:53:14.018050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4f50 (9): Bad file descriptor 00:41:54.107 [2024-11-20 05:53:14.018064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:41:54.107 [2024-11-20 05:53:14.018073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:41:54.107 [2024-11-20 05:53:14.018083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:41:54.107 [2024-11-20 05:53:14.018092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:41:54.107 [2024-11-20 05:53:14.018101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:41:55.980 6192.75 IOPS, 24.19 MiB/s [2024-11-20T05:53:16.158Z] 4954.20 IOPS, 19.35 MiB/s [2024-11-20T05:53:16.158Z] [2024-11-20 05:53:16.018213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:56.239 [2024-11-20 05:53:16.018248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e4f50 with addr=10.0.0.3, port=4420 00:41:56.239 [2024-11-20 05:53:16.018259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e4f50 is same with the state(6) to be set 00:41:56.239 [2024-11-20 05:53:16.018275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4f50 (9): Bad file descriptor 00:41:56.239 [2024-11-20 05:53:16.018288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:41:56.239 [2024-11-20 05:53:16.018297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:41:56.239 [2024-11-20 05:53:16.018307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
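What is unfolding in these error clusters is the reconnect cadence requested at attach time: removing the listener aborts the in-flight I/O, each reconnect attempt fails with errno 111 (connection refused, since the listener is gone), and bdev_nvme then backs off --reconnect-delay-sec 2 seconds between attempts, which is why the failure clusters land at 05:53:12, 05:53:14, 05:53:16, and 05:53:18. The bpftrace output dumped further down shows the same 2 s spacing (reconnect delays at roughly 3327, 5327, and 7328 ms after the 1327 ms reset), and the harness turns that into a pass/fail signal by counting those events. A sketch of that check, with the trace path taken from the log and the threshold mirroring the (( 3 <= 2 )) guard visible below:

    # At 2 s per back-off inside an ~8 s window, at least three
    # 'reconnect delay' events must appear in the bpftrace output.
    count=$(grep -c 'reconnect delay bdev controller NVMe0' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
    if (( count <= 2 )); then
        echo "expected more than 2 reconnect delays, got $count" >&2
        exit 1
    fi

The reported failure rate in the final table is consistent with this picture: 128 failed I/Os over the 8.199337 s runtime works out to the 15.61 Fail/s shown below.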
00:41:56.239 [2024-11-20 05:53:16.018315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:41:56.239 [2024-11-20 05:53:16.018325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:41:58.114 4128.50 IOPS, 16.13 MiB/s [2024-11-20T05:53:18.033Z] 3538.71 IOPS, 13.82 MiB/s [2024-11-20T05:53:18.033Z] [2024-11-20 05:53:18.018378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:41:58.114 [2024-11-20 05:53:18.018423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:41:58.114 [2024-11-20 05:53:18.018433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:41:58.114 [2024-11-20 05:53:18.018443] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:41:58.114 [2024-11-20 05:53:18.018461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:41:59.309 3096.38 IOPS, 12.10 MiB/s 00:41:59.309 Latency(us) 00:41:59.309 [2024-11-20T05:53:19.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:59.309 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:41:59.309 NVMe0n1 : 8.20 3021.10 11.80 15.61 0.00 42135.87 1864.66 7030452.42 00:41:59.309 [2024-11-20T05:53:19.228Z] =================================================================================================================== 00:41:59.309 [2024-11-20T05:53:19.228Z] Total : 3021.10 11.80 15.61 0.00 42135.87 1864.66 7030452.42 00:41:59.309 { 00:41:59.309 "results": [ 00:41:59.309 { 00:41:59.309 "job": "NVMe0n1", 00:41:59.309 "core_mask": "0x4", 00:41:59.309 "workload": "randread", 00:41:59.309 "status": "finished", 00:41:59.309 "queue_depth": 128, 00:41:59.309 "io_size": 4096, 00:41:59.309 "runtime": 8.199337, 00:41:59.309 "iops": 3021.097925356648, 00:41:59.309 "mibps": 11.801163770924406, 00:41:59.309 "io_failed": 128, 00:41:59.309 "io_timeout": 0, 00:41:59.309 "avg_latency_us": 42135.87135042716, 00:41:59.309 "min_latency_us": 1864.655238095238, 00:41:59.309 "max_latency_us": 7030452.419047619 00:41:59.309 } 00:41:59.309 ], 00:41:59.309 "core_count": 1 00:41:59.309 } 00:41:59.309 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:41:59.309 Attaching 5 probes... 
00:41:59.309 1326.968759: reset bdev controller NVMe0 00:41:59.309 1327.033294: reconnect bdev controller NVMe0 00:41:59.309 3327.723187: reconnect delay bdev controller NVMe0 00:41:59.310 3327.737938: reconnect bdev controller NVMe0 00:41:59.310 5327.959556: reconnect delay bdev controller NVMe0 00:41:59.310 5327.972277: reconnect bdev controller NVMe0 00:41:59.310 7328.180709: reconnect delay bdev controller NVMe0 00:41:59.310 7328.193763: reconnect bdev controller NVMe0 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98026 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97998 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97998 ']' 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97998 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97998 00:41:59.310 killing process with pid 97998 00:41:59.310 Received shutdown signal, test time was about 8.283816 seconds 00:41:59.310 00:41:59.310 Latency(us) 00:41:59.310 [2024-11-20T05:53:19.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:59.310 [2024-11-20T05:53:19.229Z] =================================================================================================================== 00:41:59.310 [2024-11-20T05:53:19.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97998' 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97998 00:41:59.310 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97998 00:41:59.568 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:59.568 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:41:59.568 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:41:59.568 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:59.568 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:59.828 05:53:19 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:59.828 rmmod nvme_tcp 00:41:59.828 rmmod nvme_fabrics 00:41:59.828 rmmod nvme_keyring 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 97428 ']' 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 97428 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97428 ']' 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97428 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97428 00:41:59.828 killing process with pid 97428 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97428' 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97428 00:41:59.828 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97428 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:00.086 05:53:19 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:00.345 05:53:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:42:00.345 00:42:00.345 real 0m47.049s 00:42:00.345 user 2m15.722s 00:42:00.345 sys 0m6.403s 00:42:00.345 ************************************ 00:42:00.345 END TEST nvmf_timeout 00:42:00.345 ************************************ 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:00.345 ************************************ 00:42:00.345 END TEST nvmf_host 00:42:00.345 ************************************ 00:42:00.345 00:42:00.345 real 5m36.654s 00:42:00.345 user 14m5.748s 00:42:00.345 sys 1m18.660s 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:00.345 05:53:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:42:00.604 05:53:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:42:00.604 05:53:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:42:00.604 05:53:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:42:00.604 05:53:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:00.604 05:53:20 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:00.604 05:53:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:00.604 ************************************ 00:42:00.604 START TEST nvmf_target_core_interrupt_mode 00:42:00.604 ************************************ 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:42:00.604 * Looking for test storage... 
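[Editor's note] Before the interrupt-mode suite gets going, note how the nvmftestfini teardown above cleans the host: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the target pid is killed, the veth pairs, bridge and network namespace are deleted, and every firewall rule the tests added is swept in one pass. The sweep works because each rule is installed with an SPDK_NVMF comment tag (visible later in this log when the next test re-creates them), so filtering the saved ruleset by that tag and restoring it removes exactly the test's rules and nothing else:

  # Tag on insert, sweep by tag on teardown -- both commands as recorded in this log.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables-save | grep -v SPDK_NVMF | iptables-restore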
00:42:00.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:00.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.604 --rc genhtml_branch_coverage=1 00:42:00.604 --rc genhtml_function_coverage=1 00:42:00.604 --rc genhtml_legend=1 00:42:00.604 --rc geninfo_all_blocks=1 00:42:00.604 --rc geninfo_unexecuted_blocks=1 00:42:00.604 00:42:00.604 ' 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:00.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.604 --rc genhtml_branch_coverage=1 00:42:00.604 --rc genhtml_function_coverage=1 00:42:00.604 --rc genhtml_legend=1 00:42:00.604 --rc geninfo_all_blocks=1 00:42:00.604 --rc geninfo_unexecuted_blocks=1 00:42:00.604 00:42:00.604 ' 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:00.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.604 --rc genhtml_branch_coverage=1 00:42:00.604 --rc genhtml_function_coverage=1 00:42:00.604 --rc genhtml_legend=1 00:42:00.604 --rc geninfo_all_blocks=1 00:42:00.604 --rc geninfo_unexecuted_blocks=1 00:42:00.604 00:42:00.604 ' 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:00.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.604 --rc genhtml_branch_coverage=1 00:42:00.604 --rc genhtml_function_coverage=1 00:42:00.604 --rc genhtml_legend=1 00:42:00.604 --rc geninfo_all_blocks=1 00:42:00.604 --rc geninfo_unexecuted_blocks=1 00:42:00.604 00:42:00.604 ' 00:42:00.604 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:00.863 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:00.864 ************************************ 00:42:00.864 START TEST nvmf_abort 00:42:00.864 ************************************ 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:42:00.864 * Looking for test storage... 00:42:00.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:00.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.864 --rc genhtml_branch_coverage=1 00:42:00.864 --rc genhtml_function_coverage=1 00:42:00.864 --rc genhtml_legend=1 00:42:00.864 --rc geninfo_all_blocks=1 00:42:00.864 --rc geninfo_unexecuted_blocks=1 00:42:00.864 00:42:00.864 ' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:00.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.864 --rc genhtml_branch_coverage=1 00:42:00.864 --rc genhtml_function_coverage=1 00:42:00.864 --rc genhtml_legend=1 00:42:00.864 --rc geninfo_all_blocks=1 00:42:00.864 --rc geninfo_unexecuted_blocks=1 00:42:00.864 00:42:00.864 ' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:00.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.864 --rc genhtml_branch_coverage=1 00:42:00.864 --rc genhtml_function_coverage=1 00:42:00.864 --rc genhtml_legend=1 00:42:00.864 --rc geninfo_all_blocks=1 00:42:00.864 --rc geninfo_unexecuted_blocks=1 00:42:00.864 00:42:00.864 ' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:00.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:00.864 --rc genhtml_branch_coverage=1 00:42:00.864 --rc genhtml_function_coverage=1 00:42:00.864 --rc genhtml_legend=1 00:42:00.864 --rc geninfo_all_blocks=1 00:42:00.864 --rc geninfo_unexecuted_blocks=1 00:42:00.864 00:42:00.864 ' 00:42:00.864 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.123 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:01.124 05:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:01.124 Cannot find device "nvmf_init_br" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:01.124 Cannot find device "nvmf_init_br2" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:01.124 Cannot find device "nvmf_tgt_br" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:01.124 Cannot find device "nvmf_tgt_br2" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:01.124 Cannot find device "nvmf_init_br" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:01.124 Cannot find device "nvmf_init_br2" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:01.124 Cannot find device "nvmf_tgt_br" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:01.124 Cannot find device "nvmf_tgt_br2" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:01.124 Cannot find device "nvmf_br" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:01.124 Cannot find device "nvmf_init_if" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:01.124 Cannot find device "nvmf_init_if2" 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:01.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:42:01.124 05:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:01.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:01.124 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:42:01.124 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:01.124 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:01.124 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:01.124 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:01.124 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:01.383 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:01.383 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:01.383 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:01.383 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:01.383 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:01.384 
05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:01.384 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:42:01.384 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:42:01.384 00:42:01.384 --- 10.0.0.3 ping statistics --- 00:42:01.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.384 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:01.384 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:01.384 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:42:01.384 00:42:01.384 --- 10.0.0.4 ping statistics --- 00:42:01.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.384 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:01.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:01.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:42:01.384 00:42:01.384 --- 10.0.0.1 ping statistics --- 00:42:01.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.384 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:01.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:01.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:42:01.384 00:42:01.384 --- 10.0.0.2 ping statistics --- 00:42:01.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.384 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:01.384 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=98496 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 98496 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 98496 ']' 00:42:01.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:01.642 05:53:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:01.642 [2024-11-20 05:53:21.400528] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:01.642 [2024-11-20 05:53:21.402188] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:01.642 [2024-11-20 05:53:21.402264] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:01.900 [2024-11-20 05:53:21.562100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:01.900 [2024-11-20 05:53:21.634630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:01.900 [2024-11-20 05:53:21.634895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:01.900 [2024-11-20 05:53:21.635137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:01.900 [2024-11-20 05:53:21.635299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:01.900 [2024-11-20 05:53:21.635355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:01.900 [2024-11-20 05:53:21.636862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:01.900 [2024-11-20 05:53:21.636967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:01.900 [2024-11-20 05:53:21.636971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:01.900 [2024-11-20 05:53:21.751555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:01.900 [2024-11-20 05:53:21.752391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:01.900 [2024-11-20 05:53:21.752505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:01.900 [2024-11-20 05:53:21.753782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
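[Editor's note] The target just started is reachable only through the virtual topology that nvmf_veth_init assembled above: initiator-side addresses 10.0.0.1/.2 stay in the root namespace, target-side 10.0.0.3/.4 sit on veth peers moved into nvmf_tgt_ns_spdk, and the bridge-side peers are all enslaved to nvmf_br. A sketch of that construction, condensed to one veth pair per side (the run above builds two per side, covering 10.0.0.2 and 10.0.0.4 as well):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator address, root ns
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                         # joins the two sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The nvmf_tgt launch itself (-m 0xE, --interrupt-mode) pins reactors to cores 1-3 and, per the notices above, switches each poll-group thread to intr mode, so the reactors sleep on events instead of busy-polling.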
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:02.838 [2024-11-20 05:53:22.519246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:02.838 Malloc0 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:02.838 Delay0 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:02.838 05:53:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:42:02.838 [2024-11-20 05:53:22.611306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:02.838 05:53:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:42:03.097 [2024-11-20 05:53:22.805822] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:42:05.000 Initializing NVMe Controllers
00:42:05.000 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:42:05.000 controller IO queue size 128 less than required
00:42:05.000 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:42:05.000 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:42:05.000 Initialization complete. Launching workers.
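The abort test bench assembled above is deliberately small: a TCP transport, one malloc bdev wrapped in a delay bdev (the 1,000,000 us latencies keep I/O in flight on the order of a second, so requests are still queued when the aborts arrive), subsystem cnode0 with a data listener and a discovery listener on 10.0.0.3:4420, and the bundled abort example driving 128 outstanding I/Os for one second. Restated as plain rpc.py calls, a sketch assembled from the exact commands visible in the trace (flags are copied from the harness invocation rather than re-derived):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    # The delay bdev in front of Malloc0 keeps I/O pending long enough to abort.
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # 128-deep load for 1 second; the example aborts whatever is still queued.
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The summary that follows has the expected shape for this setup: nearly every submitted I/O fails (is aborted) rather than completing, and almost all of the corresponding abort commands succeed.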
00:42:05.000 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37750
00:42:05.000 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37807, failed to submit 66
00:42:05.000 success 37750, unsuccessful 57, failed 0
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:42:05.000 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:42:05.000 rmmod nvme_tcp
00:42:05.259 rmmod nvme_fabrics
00:42:05.259 rmmod nvme_keyring
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 98496 ']'
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 98496
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 98496 ']'
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 98496
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:42:05.259 05:53:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98496
00:42:05.259 killing process with pid 98496
05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:42:05.259 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:42:05.259 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98496'
05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 98496 00:42:05.259 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 98496 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:05.518 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:05.776 05:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:42:05.776 00:42:05.776 real 0m5.039s 00:42:05.776 user 0m9.244s 00:42:05.776 sys 0m1.944s 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:42:05.776 ************************************ 00:42:05.776 END TEST nvmf_abort 00:42:05.776 ************************************ 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:05.776 ************************************ 00:42:05.776 START TEST nvmf_ns_hotplug_stress 00:42:05.776 ************************************ 00:42:05.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:42:06.036 * Looking for test storage... 00:42:06.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:42:06.036 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:42:06.036 05:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.037 --rc genhtml_branch_coverage=1 00:42:06.037 --rc genhtml_function_coverage=1 00:42:06.037 --rc genhtml_legend=1 00:42:06.037 --rc geninfo_all_blocks=1 00:42:06.037 --rc geninfo_unexecuted_blocks=1 00:42:06.037 00:42:06.037 ' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.037 --rc genhtml_branch_coverage=1 00:42:06.037 --rc genhtml_function_coverage=1 00:42:06.037 --rc genhtml_legend=1 00:42:06.037 --rc geninfo_all_blocks=1 00:42:06.037 --rc geninfo_unexecuted_blocks=1 00:42:06.037 00:42:06.037 
' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.037 --rc genhtml_branch_coverage=1 00:42:06.037 --rc genhtml_function_coverage=1 00:42:06.037 --rc genhtml_legend=1 00:42:06.037 --rc geninfo_all_blocks=1 00:42:06.037 --rc geninfo_unexecuted_blocks=1 00:42:06.037 00:42:06.037 ' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:06.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:06.037 --rc genhtml_branch_coverage=1 00:42:06.037 --rc genhtml_function_coverage=1 00:42:06.037 --rc genhtml_legend=1 00:42:06.037 --rc geninfo_all_blocks=1 00:42:06.037 --rc geninfo_unexecuted_blocks=1 00:42:06.037 00:42:06.037 ' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:06.037 05:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:06.037 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:06.038 05:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:06.038 Cannot find device "nvmf_init_br" 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:42:06.038 Cannot find device "nvmf_init_br2" 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:06.038 Cannot find device "nvmf_tgt_br" 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:42:06.038 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:06.296 Cannot find device "nvmf_tgt_br2" 00:42:06.296 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:42:06.296 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:06.296 Cannot find device "nvmf_init_br" 00:42:06.296 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:42:06.296 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:06.296 Cannot find device "nvmf_init_br2" 00:42:06.296 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:42:06.296 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:06.296 Cannot find device "nvmf_tgt_br" 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:06.296 Cannot find device "nvmf_tgt_br2" 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:06.296 Cannot find device "nvmf_br" 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:06.296 Cannot find device "nvmf_init_if" 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:06.296 Cannot find device "nvmf_init_if2" 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:06.296 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:06.296 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:06.296 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:06.555 05:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:06.555 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:06.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:06.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:42:06.556 00:42:06.556 --- 10.0.0.3 ping statistics --- 00:42:06.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.556 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:06.556 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:06.556 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:42:06.556 00:42:06.556 --- 10.0.0.4 ping statistics --- 00:42:06.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.556 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:06.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:06.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:42:06.556 00:42:06.556 --- 10.0.0.1 ping statistics --- 00:42:06.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.556 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:06.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:06.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:42:06.556 00:42:06.556 --- 10.0.0.2 ping statistics --- 00:42:06.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:06.556 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=98819 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 98819 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 98819 ']' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:06.556 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:06.556 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:06.556 [2024-11-20 05:53:26.449900] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:06.556 [2024-11-20 05:53:26.451434] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:06.556 [2024-11-20 05:53:26.451533] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:06.815 [2024-11-20 05:53:26.613453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:06.815 [2024-11-20 05:53:26.719377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:06.815 [2024-11-20 05:53:26.719429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:06.815 [2024-11-20 05:53:26.719462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:06.815 [2024-11-20 05:53:26.719476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:06.815 [2024-11-20 05:53:26.719487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:06.815 [2024-11-20 05:53:26.721104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:06.815 [2024-11-20 05:53:26.721288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:06.815 [2024-11-20 05:53:26.721294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:07.074 [2024-11-20 05:53:26.877312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:07.074 [2024-11-20 05:53:26.878670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:07.074 [2024-11-20 05:53:26.878672] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:07.074 [2024-11-20 05:53:26.878801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
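With the second interrupt-mode target up (pid 98819), ns_hotplug_stress exercises namespace hot-plug under load rather than aborts. The trace that follows creates the transport, subsystem cnode1 (capped at 10 namespaces via -m 10) with its listeners, a Malloc0/Delay0 pair, and a resizable null bdev NULL1; it then starts a 30-second spdk_nvme_perf random-read workload in the background and, for as long as perf stays alive, keeps removing namespace 1, re-adding Delay0, and resizing NULL1 upward one step per pass (null_size 1001, 1002, ...). The bursts of 'Read completed with error (sct=0, sc=11)' are reads landing on a just-removed namespace, which is exactly the condition being stressed. In outline, a sketch of the loop reconstructed from the ns_hotplug_stress.sh trace markers @40-@50 (the real script wraps these calls in result checks):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Background random-read load against the subsystem under test.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do   # loop until perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"   # grow NULL1 each iteration
    done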
00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:42:07.654 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:07.960 [2024-11-20 05:53:27.746514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:07.960 05:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:08.226 05:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:08.485 [2024-11-20 05:53:28.259087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:08.485 05:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:42:08.744 05:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:42:09.003 Malloc0 00:42:09.004 05:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:09.263 Delay0 00:42:09.263 05:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:09.263 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:42:09.521 NULL1 00:42:09.521 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:42:10.087 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=98950 00:42:10.087 05:53:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:42:10.087 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:10.087 05:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:11.022 Read completed with error (sct=0, sc=11) 00:42:11.280 05:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:11.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:11.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:11.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:11.538 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:11.538 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:42:11.538 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:42:11.796 true 00:42:11.796 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:11.796 05:53:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:12.363 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:12.621 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:42:12.621 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:42:12.880 true 00:42:12.880 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:12.880 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:13.138 05:53:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:13.397 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:42:13.397 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:42:13.397 true 00:42:13.397 05:53:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:13.397 05:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:14.774 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:14.774 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:42:14.774 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:42:14.774 true 00:42:14.774 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:14.774 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:15.341 05:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:15.341 05:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:42:15.341 05:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:42:15.599 true 00:42:15.599 05:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:15.599 05:53:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:16.534 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:16.793 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:42:16.793 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:42:17.051 true 00:42:17.051 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:17.051 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:17.310 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:17.568 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:42:17.568 05:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:42:17.826 true 00:42:17.826 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:17.826 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:18.762 05:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:18.762 05:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:42:18.762 05:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:42:19.021 true 00:42:19.021 05:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:19.021 05:53:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:19.279 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:19.543 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:42:19.543 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:42:19.806 true 00:42:19.806 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:19.806 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:20.742 05:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:20.742 05:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:42:20.742 05:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:42:21.001 true 00:42:21.001 05:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:21.001 05:53:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:21.259 05:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:21.517 05:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:42:21.517 05:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:42:21.775 true 00:42:21.775 05:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:21.775 05:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:22.710 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:22.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:22.968 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:42:22.968 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:42:22.968 true 00:42:22.968 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:22.968 05:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:23.226 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:23.484 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:42:23.484 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:42:23.742 true 00:42:23.742 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:23.742 05:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:24.676 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:24.934 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:42:24.934 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:42:25.192 true 00:42:25.192 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:25.192 05:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:25.450 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:25.708 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:42:25.708 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:42:25.708 true 00:42:25.708 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:25.708 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:26.642 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:26.901 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:42:26.901 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:42:27.158 true 00:42:27.158 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:27.158 05:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:27.416 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:27.675 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:42:27.675 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:42:27.675 true 00:42:27.675 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:27.675 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:28.609 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:28.867 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:42:28.867 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:42:29.125 true 00:42:29.125 
05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:29.125 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:29.384 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:29.641 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:42:29.641 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:42:29.899 true 00:42:29.899 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:29.899 05:53:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:30.835 05:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:30.835 05:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:42:30.835 05:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:42:31.095 true 00:42:31.095 05:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:31.095 05:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:31.354 05:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:31.612 05:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:42:31.612 05:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:42:31.612 true 00:42:31.870 05:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:31.870 05:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:32.806 05:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:32.806 05:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:42:32.806 
05:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:42:33.064 true 00:42:33.064 05:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:33.064 05:53:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:33.323 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:33.581 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:42:33.581 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:42:33.581 true 00:42:33.581 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:33.581 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:34.956 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:34.956 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:42:34.956 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:42:34.956 true 00:42:35.214 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:35.214 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:35.214 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:35.472 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:42:35.472 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:42:35.731 true 00:42:35.731 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:35.731 05:53:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:36.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:36.666 05:53:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:36.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:36.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:36.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:36.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:42:36.925 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:42:36.925 05:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:42:37.192 true 00:42:37.469 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:37.469 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:38.052 05:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:38.312 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:42:38.312 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:42:38.571 true 00:42:38.571 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:38.571 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:38.829 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:39.087 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:42:39.087 05:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:42:39.087 true 00:42:39.345 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:39.345 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:40.281 05:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:40.281 Initializing NVMe Controllers 00:42:40.281 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:42:40.281 Controller IO queue size 128, less 
than required. 00:42:40.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:40.281 Controller IO queue size 128, less than required. 00:42:40.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:40.281 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:40.281 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:42:40.281 Initialization complete. Launching workers. 00:42:40.281 ======================================================== 00:42:40.281 Latency(us) 00:42:40.281 Device Information : IOPS MiB/s Average min max 00:42:40.281 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 422.68 0.21 177179.23 2662.82 1037704.58 00:42:40.281 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15333.82 7.49 8347.02 1647.31 432762.94 00:42:40.281 ======================================================== 00:42:40.281 Total : 15756.50 7.69 12876.07 1647.31 1037704.58 00:42:40.281 00:42:40.281 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:42:40.281 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:42:40.538 true 00:42:40.538 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98950 00:42:40.538 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (98950) - No such process 00:42:40.538 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 98950 00:42:40.538 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:40.797 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:40.797 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:42:40.797 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:42:40.797 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:42:40.797 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:40.797 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:42:41.057 null0 00:42:41.057 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:41.057 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:41.057 05:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_create null1 100 4096 00:42:41.314 null1 00:42:41.314 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:41.314 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:41.314 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:42:41.572 null2 00:42:41.572 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:41.572 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:41.572 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:42:41.830 null3 00:42:41.830 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:41.830 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:41.830 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:42:42.089 null4 00:42:42.089 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:42.089 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:42.089 05:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:42:42.089 null5 00:42:42.348 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:42.348 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:42.348 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:42:42.348 null6 00:42:42.348 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:42.348 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:42.348 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:42:42.608 null7 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
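[annotation] The null_size=1001..1029 / bdev_null_resize cadence earlier in this trace is phase one of ns_hotplug_stress.sh: for as long as the spdk_nvme_perf initiator (PID 98950, captured as PERF_PID at @42) stays alive, namespace 1 is removed and re-added and NULL1 is grown by one block per pass, until the "kill: (98950) - No such process" line ends the loop. A minimal sketch reconstructed from the @44-@50 xtrace markers above (not the verbatim upstream script):

    # Sketch of the phase-one resize/hotplug loop (reconstructed from the
    # @44-@50 markers in this trace; upstream source may differ slightly).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do   # run while spdk_nvme_perf is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))                          # 1001, 1002, ... 1029 in this run
        "$rpc_py" bdev_null_resize NULL1 "$null_size"
    done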
00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:42.608 05:54:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
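[annotation] The xtrace interleaved above and below comes from eight concurrent copies of the script's add_remove helper, each cycling one namespace ID against one null bdev. Reconstructed from the @14-@18 markers visible in the trace, the helper amounts to this sketch (not the verbatim source):

    # Sketch of the add_remove helper traced above: ten add/remove cycles
    # of namespace $nsid backed by bdev $bdev on the same subsystem.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }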
00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 99963 99965 99967 99968 99970 99972 99974 99975 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:42.608 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:42.867 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:42.867 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:42.868 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:42.868 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:42.868 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:42.868 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:42.868 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:42.868 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:43.126 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:43.127 
05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:43.127 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.127 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.127 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:43.386 05:54:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.386 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.645 
05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.645 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:43.646 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.646 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.646 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:43.646 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:43.646 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:43.646 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.904 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:43.905 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:43.905 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:43.905 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:44.164 05:54:03 
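[annotation] Putting phase two together: the @59-@60 loop above created null0 through null7, the @62-@64 loop forked one add_remove worker per bdev, and the "wait 99963 99965 99967 99968 99970 99972 99974 99975" at @66 reaps those background jobs once each has finished its ten cycles, which is why the add_ns/remove_ns lines for NSIDs 1-8 interleave freely from here on. A sketch of that launch sequence, reconstructed from the trace:

    # Sketch of the phase-two launch (reconstructed from the @58-@66 markers).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &             # worker i hotplugs NSID i+1
        pids+=($!)
    done
    wait "${pids[@]}"                                  # PIDs 99963..99975 in this run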
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:44.164 05:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:44.164 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:44.164 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:44.164 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.423 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.682 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:44.940 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:44.940 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.940 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.940 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:44.941 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:45.199 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:45.199 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.199 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.199 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:45.199 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:45.458 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:45.716 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.993 05:54:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.993 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:45.994 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:46.257 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:46.257 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:46.257 05:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:46.257 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:42:46.257 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:46.257 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.257 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.257 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:46.257 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.257 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.258 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:46.258 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.258 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.258 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:46.258 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:46.516 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:46.774 05:54:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:46.774 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:47.033 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:47.291 05:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:47.291 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.292 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:47.550 
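The add/remove churn above is driven by a small arithmetic for loop: the ns_hotplug_stress.sh@16 markers are the (( ++i )) / (( i < 10 )) loop header, @17 is an nvmf_subsystem_add_ns call and @18 an nvmf_subsystem_remove_ns call. A minimal sketch of that loop follows; the nsid selection is an assumption (the trace shows adds and removes interleaving in varying order, so the real script likely randomizes and/or runs the removes from a second concurrent loop):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as it appears in the trace
    for (( i = 0; i < 10; ++i )); do                     # ns_hotplug_stress.sh@16
        n=$(( RANDOM % 8 + 1 ))                          # assumed: random nsid in 1..8
        # @17: attach one of the null bdevs (null0..null7) as namespace n of cnode1
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))"
        # @18: detach a namespace again; the batched removes in the trace suggest
        # the real script drives these separately from the adds
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
    done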
05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:47.550 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:47.810 rmmod nvme_tcp 00:42:47.810 rmmod nvme_fabrics 00:42:47.810 rmmod nvme_keyring 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 98819 ']' 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 98819 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 98819 ']' 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 98819 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98819 00:42:47.810 killing process with pid 98819 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:42:47.810 05:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98819' 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 98819 00:42:47.810 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 98819 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:48.069 05:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:48.327 05:54:08 
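The teardown path is also readable from the trace: nvmftestfini unloads the kernel modules (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are modprobe -r's verbose output), killprocess stops the nvmf_tgt reactor with pid 98819, then nvmf_tcp_fini strips the SPDK_NVMF iptables rules (iptables-save | grep -v SPDK_NVMF | iptables-restore) and deletes the veth/bridge topology. A sketch of killprocess as implied by the autotest_common.sh@952-@976 markers, with the sudo special-casing reduced to a comment:

    killprocess() {                                 # autotest_common.sh@952ff, sketch
        local pid=$1
        [[ -n $pid ]] || return 1                   # @952: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0      # @956: already gone, nothing to do
        if [[ $(uname) == Linux ]]; then            # @957
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @958: "reactor_1" here
            # @962: the real helper takes an escalated path when the name is "sudo"
        fi
        echo "killing process with pid $pid"        # @970
        kill "$pid"                                 # @971
        wait "$pid"                                 # @976: reap it so teardown is clean
    }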
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:42:48.327 00:42:48.327 real 0m42.482s 00:42:48.327 user 2m58.212s 00:42:48.327 sys 0m19.983s 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:48.327 ************************************ 00:42:48.327 END TEST nvmf_ns_hotplug_stress 00:42:48.327 ************************************ 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:48.327 ************************************ 00:42:48.327 START TEST nvmf_delete_subsystem 00:42:48.327 ************************************ 00:42:48.327 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:42:48.587 * Looking for test storage... 
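Between the two tests sits the run_test wrapper from autotest_common.sh: it prints the asterisk END/START banners, emits the time summary (real 0m42.482s for the hotplug stress above) and toggles xtrace (the @10 "set +x" entry is the tail of xtrace_disable). A rough sketch of what the @1103-@1128 markers correspond to; the usage message and the exact banner handling are assumptions, and the real wrapper also records per-test timing and status:

    run_test() {                                    # autotest_common.sh, sketch
        if [[ $# -le 1 ]]; then                     # @1103: need a name plus a command
            echo "usage: run_test name cmd [args]"  # assumed wording
            return 1
        fi
        local test_name=$1; shift
        xtrace_disable                              # @1109: mute the banner plumbing
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                                   # @1127: the test script itself
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }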
00:42:48.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:48.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.587 --rc genhtml_branch_coverage=1 00:42:48.587 --rc genhtml_function_coverage=1 00:42:48.587 --rc genhtml_legend=1 00:42:48.587 --rc geninfo_all_blocks=1 00:42:48.587 --rc geninfo_unexecuted_blocks=1 00:42:48.587 00:42:48.587 ' 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:48.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.587 --rc genhtml_branch_coverage=1 00:42:48.587 --rc genhtml_function_coverage=1 00:42:48.587 --rc genhtml_legend=1 00:42:48.587 --rc geninfo_all_blocks=1 00:42:48.587 --rc geninfo_unexecuted_blocks=1 00:42:48.587 00:42:48.587 ' 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:48.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.587 --rc genhtml_branch_coverage=1 00:42:48.587 --rc genhtml_function_coverage=1 00:42:48.587 --rc genhtml_legend=1 00:42:48.587 --rc geninfo_all_blocks=1 00:42:48.587 --rc geninfo_unexecuted_blocks=1 00:42:48.587 00:42:48.587 ' 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:48.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:48.587 --rc genhtml_branch_coverage=1 00:42:48.587 --rc genhtml_function_coverage=1 00:42:48.587 --rc 
genhtml_legend=1 00:42:48.587 --rc geninfo_all_blocks=1 00:42:48.587 --rc geninfo_unexecuted_blocks=1 00:42:48.587 00:42:48.587 ' 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:42:48.587 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
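Before the new test body runs, autotest_common.sh probes the installed lcov (lcov --version | awk '{print $NF}') and hands the result to the lt/cmp_versions helpers from scripts/common.sh, whose @333-@368 trace appears above: the two version strings are split on ".", "-" and ":" and compared field by field, with missing fields treated as zero. A condensed sketch under those assumptions (the traced "decimal" helper, which validates each field, is inlined here):

    cmp_versions() {                        # scripts/common.sh@333ff, sketch
        local ver1 ver1_l ver2 ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"      # @336: "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"      # @337
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}   # @340/@341: 2 and 1 in this run
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do   # @364
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }   # @367
            (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }   # @368
        done
        [[ $op == '=' || $op == '<=' || $op == '>=' ]]   # equal versions
    }

    lt() { cmp_versions "$1" '<' "$2"; }    # "lt 1.15 2" above is true, so the
                                            # newer lcov option set gets exported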
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:48.588 05:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:48.588 05:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:48.588 Cannot find device "nvmf_init_br" 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:48.588 Cannot find device "nvmf_init_br2" 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:48.588 Cannot find device "nvmf_tgt_br" 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:42:48.588 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:48.847 Cannot find device "nvmf_tgt_br2" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:48.848 Cannot find device "nvmf_init_br" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:42:48.848 Cannot find device "nvmf_init_br2" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:48.848 Cannot find device "nvmf_tgt_br" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:48.848 Cannot find device "nvmf_tgt_br2" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:48.848 Cannot find device "nvmf_br" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:48.848 Cannot find device "nvmf_init_if" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:48.848 Cannot find device "nvmf_init_if2" 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:48.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:48.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:48.848 05:54:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:48.848 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:49.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:49.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:42:49.107 00:42:49.107 --- 10.0.0.3 ping statistics --- 00:42:49.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:49.107 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:49.107 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:49.107 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:42:49.107 00:42:49.107 --- 10.0.0.4 ping statistics --- 00:42:49.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:49.107 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:42:49.107 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:49.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:49.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:42:49.107 00:42:49.107 --- 10.0.0.1 ping statistics --- 00:42:49.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:49.108 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:49.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:49.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:42:49.108 00:42:49.108 --- 10.0.0.2 ping statistics --- 00:42:49.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:49.108 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=101376 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 101376 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 101376 ']' 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:49.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
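At this point nvmf_veth_init has completed: the target-side veth ends sit inside the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, the initiator ends stay in the root namespace with 10.0.0.1/24 and 10.0.0.2/24, all four peer interfaces are enslaved to the nvmf_br bridge, iptables accepts TCP port 4420 on the initiator interfaces, and the four pings above verify reachability in both directions. A condensed sketch of the topology the trace just built, using the same names as the trace (the *_if2/*_br2 pair is created identically, and every link is also brought up, elided here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the two sides together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

nvmfappstart then launches nvmf_tgt inside the namespace, and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers.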
00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:49.108 05:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:49.108 [2024-11-20 05:54:09.005804] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:49.108 [2024-11-20 05:54:09.007208] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:49.108 [2024-11-20 05:54:09.007290] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:49.367 [2024-11-20 05:54:09.166435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:49.367 [2024-11-20 05:54:09.249471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:49.367 [2024-11-20 05:54:09.249541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:49.367 [2024-11-20 05:54:09.249557] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:49.367 [2024-11-20 05:54:09.249571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:49.367 [2024-11-20 05:54:09.249582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:49.367 [2024-11-20 05:54:09.251323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:49.367 [2024-11-20 05:54:09.251314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:49.626 [2024-11-20 05:54:09.440635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:49.626 [2024-11-20 05:54:09.441624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:49.626 [2024-11-20 05:54:09.441732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
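The DPDK EAL and reactor notices above show the target coming up on cores 0 and 1, and the thread.c messages confirm that --interrupt-mode took effect for app_thread and both nvmf_tgt poll groups, i.e. the reactors sleep on event file descriptors instead of busy-polling. A rough sketch of how the command line was assembled, following the nvmf/common.sh@29/@34/@227 and @508 trace lines above (the backgrounding detail is an assumption about nvmfappstart, not shown verbatim in the trace):

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)             # shm id 0, all tracepoint groups enabled
    NVMF_APP+=(--interrupt-mode)                            # appended because this is the interrupt-mode job
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # prefix with: ip netns exec nvmf_tgt_ns_spdk
    "${NVMF_APP[@]}" -m 0x3 &                               # run the target on cores 0 and 1
    nvmfpid=$!                                              # 101376 in this run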
00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.194 [2024-11-20 05:54:09.968638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.194 05:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.194 [2024-11-20 05:54:10.001014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.194 NULL1 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.194 05:54:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.194 Delay0 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=101427 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:42:50.194 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:42:50.453 [2024-11-20 05:54:10.229849] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
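With the target up, the test provisions everything over RPC: a TCP transport with the options from NVMF_TRANSPORT_OPTS plus -u 8192 (8 KiB in-capsule data), subsystem nqn.2016-06.io.spdk:cnode1 accepting any host (-a) with serial SPDK00000000000001 and at most 10 namespaces (-m 10), a listener on 10.0.0.3:4420, and a namespace backed by Delay0, a delay bdev stacked on the 1000 MB null bdev NULL1 that adds about one second (-r/-t/-w/-n 1000000 microseconds) to every I/O. spdk_nvme_perf (pid 101427) then drives a 5-second 70/30 random read/write load at queue depth 128 with 512-byte I/O from cores 2-3 (-c 0xC). The artificial latency guarantees that the nvmf_delete_subsystem call below lands while a full queue depth of commands is still in flight; those commands complete with sct=0, sc=8 (generic status: Command Aborted due to SQ Deletion) and further submissions fail with -6 (ENXIO), which is exactly the race this test exercises. The same provisioning as direct scripts/rpc.py calls (a sketch; the trace wraps them in rpc_cmd):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_null_create NULL1 1000 512        # 1000 MB backing device, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0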
00:42:52.358 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:52.358 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:52.358 05:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 starting I/O failed: -6 00:42:52.358 Write completed with error (sct=0, sc=8) 00:42:52.358 Write completed with error (sct=0, sc=8) 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 Write completed with error (sct=0, sc=8) 00:42:52.358 starting I/O failed: -6 00:42:52.358 Write completed with error (sct=0, sc=8) 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 starting I/O failed: -6 00:42:52.358 Read completed with error (sct=0, sc=8) 00:42:52.358 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 [2024-11-20 05:54:12.270587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f091c00d4d0 is same with the state(6) to be set 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 
Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 starting I/O failed: -6 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 [2024-11-20 05:54:12.272270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215ba50 is same with the state(6) to be set 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 
00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Write completed with error (sct=0, sc=8) 00:42:52.359 Read completed with error (sct=0, sc=8) 00:42:53.737 [2024-11-20 05:54:13.243531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2157ee0 is same with the state(6) to be set 00:42:53.737 Write completed with error (sct=0, sc=8) 00:42:53.737 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 [2024-11-20 05:54:13.260023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215bc30 is same with the state(6) to be set 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 [2024-11-20 05:54:13.267408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f091c00d020 is same with the state(6) to be set 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error 
(sct=0, sc=8) 00:42:53.738 [2024-11-20 05:54:13.267585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f091c00d800 is same with the state(6) to be set 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 Read completed with error (sct=0, sc=8) 00:42:53.738 Write completed with error (sct=0, sc=8) 00:42:53.738 [2024-11-20 05:54:13.267796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f091c000c40 is same with the state(6) to be set 00:42:53.738 Initializing NVMe Controllers 00:42:53.738 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:42:53.738 Controller IO queue size 128, less than required. 00:42:53.738 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:53.738 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:53.738 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:53.738 Initialization complete. Launching workers. 
00:42:53.738 ======================================================== 00:42:53.738 Latency(us) 00:42:53.738 Device Information : IOPS MiB/s Average min max 00:42:53.738 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 149.50 0.07 902318.03 805.20 1016558.21 00:42:53.738 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.87 0.08 994368.27 3806.93 2000858.45 00:42:53.738 ======================================================== 00:42:53.738 Total : 311.37 0.15 950172.45 805.20 2000858.45 00:42:53.738 00:42:53.738 [2024-11-20 05:54:13.268480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2157ee0 (9): Bad file descriptor 00:42:53.738 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:42:53.738 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:53.738 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:42:53.738 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101427 00:42:53.738 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101427 00:42:53.998 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (101427) - No such process 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 101427 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 101427 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 101427 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:53.998 [2024-11-20 05:54:13.796781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=101467 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:53.998 05:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:54.257 [2024-11-20 05:54:13.978674] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
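Phase one ended with NOT wait 101427 asserting that perf exited nonzero (es=1) after its subsystem vanished underneath it. Phase two re-creates cnode1, the listener, and the Delay0 namespace, then starts a shorter 3-second perf run (pid 101467) and, instead of deleting the subsystem mid-run, simply polls until perf finishes on its own; the iterations below replay as @57 kill -0 / @58 sleep 0.5 / @60 delay guard. A sketch of the loop structure implied by those line numbers (the bail-out action is an assumption, since the trace never reaches it):

    delay=0
    while kill -0 $perf_pid; do       # @57: is perf still running?
        sleep 0.5                     # @58
        (( delay++ > 20 )) && exit 1  # @60: assumed bail-out after ~10 s of polling
    done

Once kill -0 reports "No such process", the script waits on the pid, clears the trap, and nvmftestfini tears the environment down: the nvme-tcp, nvme-fabrics, and nvme-keyring modules are unloaded, the SPDK_NVMF iptables rules are filtered back out (iptables-save | grep -v SPDK_NVMF | iptables-restore), and the veth/bridge/namespace topology from setup is deleted.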
00:42:54.516 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:54.516 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:54.516 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:55.083 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:55.083 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:55.083 05:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:55.651 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:55.651 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:55.651 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:56.219 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:56.219 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:56.219 05:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:56.478 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:56.478 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:56.478 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:57.046 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:57.046 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:57.046 05:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:57.306 Initializing NVMe Controllers 00:42:57.306 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:42:57.306 Controller IO queue size 128, less than required. 00:42:57.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:57.306 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:57.306 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:57.306 Initialization complete. Launching workers. 
00:42:57.306 ======================================================== 00:42:57.306 Latency(us) 00:42:57.306 Device Information : IOPS MiB/s Average min max 00:42:57.306 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004325.74 1000198.84 1017639.07 00:42:57.306 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006491.52 1000217.09 1041840.49 00:42:57.306 ======================================================== 00:42:57.306 Total : 256.00 0.12 1005408.63 1000198.84 1041840.49 00:42:57.306 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101467 00:42:57.565 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (101467) - No such process 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 101467 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:57.565 rmmod nvme_tcp 00:42:57.565 rmmod nvme_fabrics 00:42:57.565 rmmod nvme_keyring 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 101376 ']' 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 101376 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 101376 ']' 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 101376 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:57.565 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- 
# ps --no-headers -o comm= 101376 00:42:57.824 killing process with pid 101376 00:42:57.824 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:57.824 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:57.824 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101376' 00:42:57.824 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 101376 00:42:57.824 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 101376 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:42:58.083 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:42:58.343 00:42:58.343 real 0m9.872s 00:42:58.343 user 0m24.731s 00:42:58.343 sys 0m2.520s 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:58.343 ************************************ 00:42:58.343 END TEST nvmf_delete_subsystem 00:42:58.343 ************************************ 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:58.343 ************************************ 00:42:58.343 START TEST nvmf_host_management 00:42:58.343 ************************************ 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:42:58.343 * Looking for test storage... 
00:42:58.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:42:58.343 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.603 --rc genhtml_branch_coverage=1 00:42:58.603 --rc genhtml_function_coverage=1 00:42:58.603 --rc genhtml_legend=1 00:42:58.603 --rc geninfo_all_blocks=1 00:42:58.603 --rc geninfo_unexecuted_blocks=1 00:42:58.603 00:42:58.603 ' 00:42:58.603 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:58.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.604 --rc genhtml_branch_coverage=1 00:42:58.604 --rc genhtml_function_coverage=1 00:42:58.604 --rc genhtml_legend=1 00:42:58.604 --rc geninfo_all_blocks=1 00:42:58.604 --rc geninfo_unexecuted_blocks=1 00:42:58.604 00:42:58.604 ' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:58.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.604 --rc genhtml_branch_coverage=1 00:42:58.604 --rc genhtml_function_coverage=1 00:42:58.604 --rc genhtml_legend=1 00:42:58.604 --rc geninfo_all_blocks=1 00:42:58.604 --rc geninfo_unexecuted_blocks=1 00:42:58.604 00:42:58.604 ' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:58.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:58.604 --rc genhtml_branch_coverage=1 00:42:58.604 --rc genhtml_function_coverage=1 00:42:58.604 --rc genhtml_legend=1 
00:42:58.604 --rc geninfo_all_blocks=1 00:42:58.604 --rc geninfo_unexecuted_blocks=1 00:42:58.604 00:42:58.604 ' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:58.604 05:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:42:58.604 05:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:58.604 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:42:58.605 Cannot find device "nvmf_init_br" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:42:58.605 Cannot find device "nvmf_init_br2" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:42:58.605 Cannot find device "nvmf_tgt_br" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:42:58.605 Cannot find device "nvmf_tgt_br2" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:42:58.605 Cannot find device "nvmf_init_br" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:42:58.605 Cannot find device "nvmf_init_br2" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:42:58.605 Cannot find device "nvmf_tgt_br" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:42:58.605 Cannot find device "nvmf_tgt_br2" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:42:58.605 Cannot find device "nvmf_br" 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:42:58.605 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:42:58.865 Cannot find device "nvmf_init_if" 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:42:58.865 Cannot find device "nvmf_init_if2" 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:42:58.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:42:58.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:42:58.865 05:54:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:42:58.865 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:42:59.148 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:42:59.148 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.313 ms 00:42:59.149 00:42:59.149 --- 10.0.0.3 ping statistics --- 00:42:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:59.149 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:42:59.149 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:42:59.149 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:42:59.149 00:42:59.149 --- 10.0.0.4 ping statistics --- 00:42:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:59.149 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:42:59.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:59.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:42:59.149 00:42:59.149 --- 10.0.0.1 ping statistics --- 00:42:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:59.149 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:42:59.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:59.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:42:59.149 00:42:59.149 --- 10.0.0.2 ping statistics --- 00:42:59.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:59.149 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=101756 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 101756 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 101756 ']' 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:59.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:59.149 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:42:59.149 [2024-11-20 05:54:18.894349] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:59.149 [2024-11-20 05:54:18.895754] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:59.149 [2024-11-20 05:54:18.895821] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:59.471 [2024-11-20 05:54:19.057306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:59.471 [2024-11-20 05:54:19.126764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:59.471 [2024-11-20 05:54:19.126836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:59.471 [2024-11-20 05:54:19.126854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:59.471 [2024-11-20 05:54:19.126869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:59.471 [2024-11-20 05:54:19.126882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:59.471 [2024-11-20 05:54:19.128196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:59.471 [2024-11-20 05:54:19.128389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:59.471 [2024-11-20 05:54:19.128862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:42:59.472 [2024-11-20 05:54:19.128870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:59.472 [2024-11-20 05:54:19.223187] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:59.472 [2024-11-20 05:54:19.224769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:59.472 [2024-11-20 05:54:19.224829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:59.472 [2024-11-20 05:54:19.225648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:59.472 [2024-11-20 05:54:19.225657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
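[editor] The nvmf_veth_init plumbing and the interrupt-mode target launch traced above condense to roughly the following sketch; it assumes root, iproute2, iptables, and an SPDK build at the logged /home/vagrant/spdk_repo/spdk path, shows only one of the two initiator/target veth pairs, and is an illustration of what the harness does, not the harness itself:

# Target network stack lives in its own namespace, cabled to the host through a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                                  # bridge joins the two stub ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                               # sanity-check the initiator -> target path
# Launch the target in interrupt mode inside the namespace, as nvmfappstart does above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E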
00:43:00.039 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:00.039 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:43:00.039 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:00.039 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:00.039 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:00.299 05:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:00.299 [2024-11-20 05:54:20.013982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:00.299 Malloc0 00:43:00.299 [2024-11-20 05:54:20.118255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=101828 00:43:00.299 05:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 101828 /var/tmp/bdevperf.sock 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 101828 ']' 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:43:00.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:00.299 { 00:43:00.299 "params": { 00:43:00.299 "name": "Nvme$subsystem", 00:43:00.299 "trtype": "$TEST_TRANSPORT", 00:43:00.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:00.299 "adrfam": "ipv4", 00:43:00.299 "trsvcid": "$NVMF_PORT", 00:43:00.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:00.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:00.299 "hdgst": ${hdgst:-false}, 00:43:00.299 "ddgst": ${ddgst:-false} 00:43:00.299 }, 00:43:00.299 "method": "bdev_nvme_attach_controller" 00:43:00.299 } 00:43:00.299 EOF 00:43:00.299 )") 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:43:00.299 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:00.299 "params": { 00:43:00.299 "name": "Nvme0", 00:43:00.299 "trtype": "tcp", 00:43:00.299 "traddr": "10.0.0.3", 00:43:00.299 "adrfam": "ipv4", 00:43:00.299 "trsvcid": "4420", 00:43:00.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:00.299 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:00.299 "hdgst": false, 00:43:00.299 "ddgst": false 00:43:00.299 }, 00:43:00.299 "method": "bdev_nvme_attach_controller" 00:43:00.299 }' 00:43:00.559 [2024-11-20 05:54:20.235383] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:43:00.559 [2024-11-20 05:54:20.235488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101828 ] 00:43:00.559 [2024-11-20 05:54:20.395325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:00.559 [2024-11-20 05:54:20.452513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:00.818 Running I/O for 10 seconds... 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=994 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 994 -ge 100 ']' 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:01.387 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:01.387 [2024-11-20 05:54:21.221875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.221996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.222004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 
00:43:01.387 [2024-11-20 05:54:21.222012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.222020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.222030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.222039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.222047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.387 [2024-11-20 05:54:21.222055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.222191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21069f0 is same with the state(6) to be set 00:43:01.388 [2024-11-20 05:54:21.226497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:43:01.388 [2024-11-20 05:54:21.226928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.226985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.226994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.227004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.227013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.227023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.227032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:01.388 [2024-11-20 05:54:21.227042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.227052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.227062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.227071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.227081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.227090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.227100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.227108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.388 [2024-11-20 05:54:21.227118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.388 [2024-11-20 05:54:21.227127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:43:01.389 [2024-11-20 05:54:21.227234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 
05:54:21.227300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:01.389 [2024-11-20 05:54:21.227427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:01.389 [2024-11-20 05:54:21.227564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:01.389 [2024-11-20 05:54:21.227799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:01.389 [2024-11-20 05:54:21.227828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:43:01.389 [2024-11-20 05:54:21.228812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:43:01.389 task offset: 8192 on job bdev=Nvme0n1 fails 00:43:01.389 00:43:01.389 Latency(us) 00:43:01.389 [2024-11-20T05:54:21.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:01.389 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:43:01.389 Job: Nvme0n1 ended in about 0.60 seconds with error 00:43:01.389 Verification LBA range: start 0x0 length 0x400 00:43:01.389 Nvme0n1 : 0.60 1802.68 112.67 106.04 0.00 32566.41 1989.49 37199.48 00:43:01.389 [2024-11-20T05:54:21.308Z] =================================================================================================================== 00:43:01.389 [2024-11-20T05:54:21.308Z] Total : 1802.68 112.67 106.04 0.00 32566.41 1989.49 37199.48 00:43:01.390 [2024-11-20 05:54:21.230421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:01.390 [2024-11-20 05:54:21.230452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205c660 (9): Bad file descriptor 00:43:01.390 [2024-11-20 05:54:21.233178] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:43:01.390 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:01.390 05:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:43:02.327 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 101828 00:43:02.327 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (101828) - No such process 00:43:02.327 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:43:02.328 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:43:02.328 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:43:02.328 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:43:02.328 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:43:02.328 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:43:02.587 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:02.587 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:02.587 { 00:43:02.587 "params": { 00:43:02.587 "name": "Nvme$subsystem", 00:43:02.587 "trtype": "$TEST_TRANSPORT", 00:43:02.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:02.587 "adrfam": "ipv4", 00:43:02.587 "trsvcid": "$NVMF_PORT", 00:43:02.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:02.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:02.587 "hdgst": ${hdgst:-false}, 00:43:02.587 "ddgst": ${ddgst:-false} 00:43:02.587 }, 00:43:02.587 "method": "bdev_nvme_attach_controller" 00:43:02.587 } 00:43:02.587 EOF 00:43:02.587 )") 00:43:02.587 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:43:02.587 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:43:02.587 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:43:02.587 05:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:02.587 "params": { 00:43:02.587 "name": "Nvme0", 00:43:02.587 "trtype": "tcp", 00:43:02.587 "traddr": "10.0.0.3", 00:43:02.587 "adrfam": "ipv4", 00:43:02.587 "trsvcid": "4420", 00:43:02.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.587 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:02.587 "hdgst": false, 00:43:02.587 "ddgst": false 00:43:02.587 }, 00:43:02.587 "method": "bdev_nvme_attach_controller" 00:43:02.587 }' 00:43:02.587 [2024-11-20 05:54:22.298475] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:43:02.587 [2024-11-20 05:54:22.298573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101880 ] 00:43:02.587 [2024-11-20 05:54:22.457051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.847 [2024-11-20 05:54:22.537103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.847 Running I/O for 1 seconds... 00:43:04.225 1984.00 IOPS, 124.00 MiB/s 00:43:04.225 Latency(us) 00:43:04.225 [2024-11-20T05:54:24.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:04.225 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:43:04.225 Verification LBA range: start 0x0 length 0x400 00:43:04.225 Nvme0n1 : 1.01 2032.21 127.01 0.00 0.00 30993.61 4930.80 28211.69 00:43:04.225 [2024-11-20T05:54:24.144Z] =================================================================================================================== 00:43:04.225 [2024-11-20T05:54:24.144Z] Total : 2032.21 127.01 0.00 0.00 30993.61 4930.80 28211.69 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:04.225 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:04.225 rmmod nvme_tcp 00:43:04.484 rmmod nvme_fabrics 00:43:04.484 rmmod nvme_keyring 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 101756 ']' 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 101756 00:43:04.484 05:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 101756 ']' 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 101756 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 101756 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:43:04.484 killing process with pid 101756 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101756' 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 101756 00:43:04.484 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 101756 00:43:04.484 [2024-11-20 05:54:24.385388] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:04.742 05:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:04.742 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:04.743 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:04.743 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:04.743 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:43:05.002 00:43:05.002 real 0m6.550s 00:43:05.002 user 0m19.012s 00:43:05.002 sys 0m2.773s 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.002 ************************************ 00:43:05.002 END TEST nvmf_host_management 00:43:05.002 ************************************ 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:05.002 ************************************ 00:43:05.002 START TEST nvmf_lvol 00:43:05.002 ************************************ 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:43:05.002 * Looking for test storage... 
00:43:05.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:05.002 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:05.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.264 --rc genhtml_branch_coverage=1 00:43:05.264 --rc genhtml_function_coverage=1 00:43:05.264 --rc genhtml_legend=1 00:43:05.264 --rc geninfo_all_blocks=1 00:43:05.264 --rc geninfo_unexecuted_blocks=1 00:43:05.264 00:43:05.264 ' 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:05.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.264 --rc genhtml_branch_coverage=1 00:43:05.264 --rc genhtml_function_coverage=1 00:43:05.264 --rc genhtml_legend=1 00:43:05.264 --rc geninfo_all_blocks=1 00:43:05.264 --rc geninfo_unexecuted_blocks=1 00:43:05.264 00:43:05.264 ' 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:05.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.264 --rc genhtml_branch_coverage=1 00:43:05.264 --rc genhtml_function_coverage=1 00:43:05.264 --rc genhtml_legend=1 00:43:05.264 --rc geninfo_all_blocks=1 00:43:05.264 --rc geninfo_unexecuted_blocks=1 00:43:05.264 00:43:05.264 ' 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:05.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.264 --rc genhtml_branch_coverage=1 00:43:05.264 --rc genhtml_function_coverage=1 00:43:05.264 --rc genhtml_legend=1 00:43:05.264 --rc geninfo_all_blocks=1 00:43:05.264 --rc geninfo_unexecuted_blocks=1 00:43:05.264 00:43:05.264 ' 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:05.264 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:05.265 05:54:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:05.265 05:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:05.265 Cannot find device "nvmf_init_br" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:05.265 Cannot find device "nvmf_init_br2" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:05.265 Cannot find device "nvmf_tgt_br" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:05.265 Cannot find device "nvmf_tgt_br2" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:05.265 Cannot find device "nvmf_init_br" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:05.265 Cannot find device "nvmf_init_br2" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:05.265 Cannot find 
device "nvmf_tgt_br" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:05.265 Cannot find device "nvmf_tgt_br2" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:05.265 Cannot find device "nvmf_br" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:05.265 Cannot find device "nvmf_init_if" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:05.265 Cannot find device "nvmf_init_if2" 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:05.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:05.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:05.265 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:05.524 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:43:05.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:43:05.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:43:05.525 00:43:05.525 --- 10.0.0.3 ping statistics --- 00:43:05.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.525 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:05.525 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:05.525 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:43:05.525 00:43:05.525 --- 10.0.0.4 ping statistics --- 00:43:05.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.525 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:05.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:05.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:43:05.525 00:43:05.525 --- 10.0.0.1 ping statistics --- 00:43:05.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.525 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:05.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:05.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:43:05.525 00:43:05.525 --- 10.0.0.2 ping statistics --- 00:43:05.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.525 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:05.525 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=102156 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 102156 00:43:05.784 05:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 102156 ']' 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:05.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:43:05.784 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:05.784 [2024-11-20 05:54:25.505856] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:05.784 [2024-11-20 05:54:25.507227] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:43:05.784 [2024-11-20 05:54:25.507302] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.784 [2024-11-20 05:54:25.657198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:06.043 [2024-11-20 05:54:25.737500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:06.043 [2024-11-20 05:54:25.737550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:06.043 [2024-11-20 05:54:25.737561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:06.044 [2024-11-20 05:54:25.737570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:06.044 [2024-11-20 05:54:25.737577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:06.044 [2024-11-20 05:54:25.739035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:06.044 [2024-11-20 05:54:25.739140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.044 [2024-11-20 05:54:25.739138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:06.044 [2024-11-20 05:54:25.872074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:06.044 [2024-11-20 05:54:25.873024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:06.044 [2024-11-20 05:54:25.873046] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:06.044 [2024-11-20 05:54:25.874185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
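For orientation before the next stretch of trace: the nvmf_lvol test that follows drives the target entirely through rpc.py. Condensed into a short shell sketch, the sequence looks roughly like this (command names, sizes, and the NQN are taken verbatim from the trace below; the $lvs/$snap/$clone variables are illustrative stand-ins for the UUIDs the test captures from rpc.py output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                    # -> Malloc0 (64 MiB, 512 B blocks)
    $rpc bdev_malloc_create 64 512                    # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # lvstore on the raid0 bdev
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB lvol, resized to 30 later
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # while spdk_nvme_perf runs randwrite over TCP against 10.0.0.3:4420,
    # the lvol is snapshotted, resized, cloned, and the clone inflated:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"

The point of the perf/clone overlap is to exercise lvol metadata operations under live NVMe-oF write load, which is why the trace interleaves the RPC calls with the perf worker output.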
00:43:06.612 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:06.612 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:43:06.612 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:06.612 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:06.612 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:06.872 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:06.872 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:07.131 [2024-11-20 05:54:26.816288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:07.131 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:07.390 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:43:07.390 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:07.651 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:43:07.651 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:43:07.910 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:43:08.170 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6b96e0b1-940a-4af0-8e26-65fa034b8e9a 00:43:08.170 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6b96e0b1-940a-4af0-8e26-65fa034b8e9a lvol 20 00:43:08.430 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=53a594a8-524d-4e65-9d0b-a38acdf9e65a 00:43:08.430 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:43:08.689 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53a594a8-524d-4e65-9d0b-a38acdf9e65a 00:43:08.947 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:43:09.206 [2024-11-20 05:54:28.888213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:09.206 05:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:09.206 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:43:09.206 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=102298 00:43:09.206 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:43:10.584 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 53a594a8-524d-4e65-9d0b-a38acdf9e65a MY_SNAPSHOT 00:43:10.584 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7fa8f0af-d882-4ccb-b9c8-20fa930f9734 00:43:10.584 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 53a594a8-524d-4e65-9d0b-a38acdf9e65a 30 00:43:11.153 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 7fa8f0af-d882-4ccb-b9c8-20fa930f9734 MY_CLONE 00:43:11.153 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a7f1fc9c-9c59-4417-b4d1-393208fffa22 00:43:11.153 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate a7f1fc9c-9c59-4417-b4d1-393208fffa22 00:43:12.090 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 102298 00:43:20.243 Initializing NVMe Controllers 00:43:20.243 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:43:20.243 Controller IO queue size 128, less than required. 00:43:20.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:20.243 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:43:20.244 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:43:20.244 Initialization complete. Launching workers. 
00:43:20.244 ======================================================== 00:43:20.244 Latency(us) 00:43:20.244 Device Information : IOPS MiB/s Average min max 00:43:20.244 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8635.77 33.73 14834.82 4389.35 89105.47 00:43:20.244 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6201.74 24.23 20654.49 4594.87 103967.42 00:43:20.244 ======================================================== 00:43:20.244 Total : 14837.51 57.96 17267.31 4389.35 103967.42 00:43:20.244 00:43:20.244 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:20.244 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 53a594a8-524d-4e65-9d0b-a38acdf9e65a 00:43:20.244 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b96e0b1-940a-4af0-8e26-65fa034b8e9a 00:43:20.244 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:43:20.244 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:43:20.244 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:43:20.244 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:20.244 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:20.503 rmmod nvme_tcp 00:43:20.503 rmmod nvme_fabrics 00:43:20.503 rmmod nvme_keyring 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 102156 ']' 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 102156 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 102156 ']' 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 102156 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 102156 00:43:20.503 killing 
process with pid 102156 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 102156' 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 102156 00:43:20.503 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 102156 00:43:20.762 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:20.762 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:20.763 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:21.022 
05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:43:21.022 00:43:21.022 real 0m15.980s 00:43:21.022 user 0m54.119s 00:43:21.022 sys 0m6.982s 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:21.022 ************************************ 00:43:21.022 END TEST nvmf_lvol 00:43:21.022 ************************************ 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:21.022 ************************************ 00:43:21.022 START TEST nvmf_lvs_grow 00:43:21.022 ************************************ 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:43:21.022 * Looking for test storage... 
00:43:21.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:43:21.022 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:43:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.284 --rc genhtml_branch_coverage=1 00:43:21.284 --rc genhtml_function_coverage=1 00:43:21.284 --rc genhtml_legend=1 00:43:21.284 --rc geninfo_all_blocks=1 00:43:21.284 --rc geninfo_unexecuted_blocks=1 00:43:21.284 00:43:21.284 ' 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:43:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.284 --rc genhtml_branch_coverage=1 00:43:21.284 --rc genhtml_function_coverage=1 00:43:21.284 --rc genhtml_legend=1 00:43:21.284 --rc geninfo_all_blocks=1 00:43:21.284 --rc geninfo_unexecuted_blocks=1 00:43:21.284 00:43:21.284 ' 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:43:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.284 --rc genhtml_branch_coverage=1 00:43:21.284 --rc genhtml_function_coverage=1 00:43:21.284 --rc genhtml_legend=1 00:43:21.284 --rc geninfo_all_blocks=1 00:43:21.284 --rc geninfo_unexecuted_blocks=1 00:43:21.284 00:43:21.284 ' 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:43:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:21.284 --rc genhtml_branch_coverage=1 00:43:21.284 --rc genhtml_function_coverage=1 00:43:21.284 --rc genhtml_legend=1 00:43:21.284 --rc geninfo_all_blocks=1 00:43:21.284 --rc geninfo_unexecuted_blocks=1 00:43:21.284 00:43:21.284 ' 00:43:21.284 05:54:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:21.284 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:43:21.284 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:21.284 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:21.284 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:21.284 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.284 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.284 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.284 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
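The common.sh trace around this point is assembling the target's argument list one array append at a time. A rough reconstruction of that logic, for readability (only the tested values 0/1 and the appended flags are visible in the trace; the guard variable names here are illustrative assumptions, not the script's actual names):

    NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")                  # assumed base command
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)          # common.sh@29: shm id + all tracepoint groups
    NVMF_APP+=("${NO_HUGE[@]}")                          # common.sh@31: empty in this run
    [[ $interrupt_mode -eq 1 ]] && NVMF_APP+=(--interrupt-mode)   # common.sh@33-34: true here
    # after the network namespace exists, the command is prefixed so the
    # target runs inside it (this line appears verbatim at common.sh@227):
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x1 &   # matches the 'nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1' line below

This explains why the eventual launch line below reads `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1`: each piece was appended by a different conditional in common.sh.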
00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:21.285 05:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:43:21.285 Cannot find device "nvmf_init_br" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:43:21.285 Cannot find device "nvmf_init_br2" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:43:21.285 Cannot find device "nvmf_tgt_br" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:43:21.285 Cannot find device "nvmf_tgt_br2" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:43:21.285 Cannot find device "nvmf_init_br" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:43:21.285 Cannot find device "nvmf_init_br2" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:43:21.285 Cannot find device "nvmf_tgt_br" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:43:21.285 Cannot find device "nvmf_tgt_br2" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:43:21.285 Cannot find device "nvmf_br" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:43:21.285 Cannot find device "nvmf_init_if" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:43:21.285 Cannot find device "nvmf_init_if2" 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:43:21.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:43:21.285 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:43:21.285 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:43:21.545 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:43:21.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:43:21.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:43:21.545 00:43:21.545 --- 10.0.0.3 ping statistics --- 00:43:21.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.545 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:43:21.806 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:43:21.806 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:43:21.806 00:43:21.806 --- 10.0.0.4 ping statistics --- 00:43:21.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.806 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:43:21.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:21.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:43:21.806 00:43:21.806 --- 10.0.0.1 ping statistics --- 00:43:21.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.806 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:43:21.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:21.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:43:21.806 00:43:21.806 --- 10.0.0.2 ping statistics --- 00:43:21.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.806 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=102708 00:43:21.806 05:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 102708 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 102708 ']' 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:21.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:21.806 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:21.806 [2024-11-20 05:54:41.574044] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:21.806 [2024-11-20 05:54:41.575505] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:43:21.806 [2024-11-20 05:54:41.575577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:22.066 [2024-11-20 05:54:41.731916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:22.066 [2024-11-20 05:54:41.842171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:22.066 [2024-11-20 05:54:41.842249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:22.066 [2024-11-20 05:54:41.842266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:22.066 [2024-11-20 05:54:41.842280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:22.066 [2024-11-20 05:54:41.842292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:22.066 [2024-11-20 05:54:41.842852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.325 [2024-11-20 05:54:42.029135] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:22.325 [2024-11-20 05:54:42.029435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
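The startup recorded above reduces to launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace in interrupt mode and then polling its RPC socket until it answers. A minimal sketch of that step, reusing the exact command line from this log; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its real implementation:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done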
00:43:22.894 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:22.894 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:43:22.894 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:22.894 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:22.894 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:22.894 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:22.894 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:23.154 [2024-11-20 05:54:42.960058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:23.154 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:43:23.154 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:43:23.154 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:23.154 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:23.154 ************************************ 00:43:23.154 START TEST lvs_grow_clean 00:43:23.154 ************************************ 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:43:23.154 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:43:23.413 05:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:43:23.413 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:43:23.672 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:23.672 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:23.672 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:43:23.932 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:43:23.932 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:43:23.932 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 lvol 150 00:43:24.192 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=025bef03-2295-4f73-9b79-d3ff197065bc 00:43:24.192 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:43:24.192 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:43:24.451 [2024-11-20 05:54:44.287780] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:43:24.451 [2024-11-20 05:54:44.287954] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:43:24.451 true 00:43:24.451 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:24.451 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:43:24.711 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:43:24.711 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:43:24.970 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 025bef03-2295-4f73-9b79-d3ff197065bc 00:43:25.230 05:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:43:25.489 [2024-11-20 05:54:45.244420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:25.489 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102869 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102869 /var/tmp/bdevperf.sock 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 102869 ']' 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:25.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:25.748 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:43:25.748 [2024-11-20 05:54:45.516673] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
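Up to this point the clean-path test has exported the 150M lvol over NVMe/TCP; the EAL banner above is bdevperf starting on the initiator side. The export boils down to four RPCs, copied verbatim from this run ($rpc is shorthand introduced here for readability, not part of the script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 025bef03-2295-4f73-9b79-d3ff197065bc
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420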
00:43:25.748 [2024-11-20 05:54:45.516781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102869 ] 00:43:26.007 [2024-11-20 05:54:45.674907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:26.007 [2024-11-20 05:54:45.739117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:26.944 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:26.944 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:43:26.944 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:43:26.944 Nvme0n1 00:43:26.944 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:43:27.204 [ 00:43:27.204 { 00:43:27.204 "aliases": [ 00:43:27.204 "025bef03-2295-4f73-9b79-d3ff197065bc" 00:43:27.204 ], 00:43:27.204 "assigned_rate_limits": { 00:43:27.204 "r_mbytes_per_sec": 0, 00:43:27.204 "rw_ios_per_sec": 0, 00:43:27.204 "rw_mbytes_per_sec": 0, 00:43:27.204 "w_mbytes_per_sec": 0 00:43:27.204 }, 00:43:27.204 "block_size": 4096, 00:43:27.204 "claimed": false, 00:43:27.204 "driver_specific": { 00:43:27.204 "mp_policy": "active_passive", 00:43:27.204 "nvme": [ 00:43:27.204 { 00:43:27.204 "ctrlr_data": { 00:43:27.204 "ana_reporting": false, 00:43:27.204 "cntlid": 1, 00:43:27.204 "firmware_revision": "25.01", 00:43:27.204 "model_number": "SPDK bdev Controller", 00:43:27.204 "multi_ctrlr": true, 00:43:27.204 "oacs": { 00:43:27.204 "firmware": 0, 00:43:27.204 "format": 0, 00:43:27.204 "ns_manage": 0, 00:43:27.204 "security": 0 00:43:27.204 }, 00:43:27.204 "serial_number": "SPDK0", 00:43:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:27.204 "vendor_id": "0x8086" 00:43:27.204 }, 00:43:27.204 "ns_data": { 00:43:27.204 "can_share": true, 00:43:27.204 "id": 1 00:43:27.204 }, 00:43:27.204 "trid": { 00:43:27.204 "adrfam": "IPv4", 00:43:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:27.204 "traddr": "10.0.0.3", 00:43:27.204 "trsvcid": "4420", 00:43:27.204 "trtype": "TCP" 00:43:27.204 }, 00:43:27.204 "vs": { 00:43:27.204 "nvme_version": "1.3" 00:43:27.204 } 00:43:27.204 } 00:43:27.204 ] 00:43:27.204 }, 00:43:27.204 "memory_domains": [ 00:43:27.204 { 00:43:27.204 "dma_device_id": "system", 00:43:27.204 "dma_device_type": 1 00:43:27.204 } 00:43:27.204 ], 00:43:27.204 "name": "Nvme0n1", 00:43:27.204 "num_blocks": 38912, 00:43:27.204 "numa_id": -1, 00:43:27.204 "product_name": "NVMe disk", 00:43:27.204 "supported_io_types": { 00:43:27.204 "abort": true, 00:43:27.204 "compare": true, 00:43:27.204 "compare_and_write": true, 00:43:27.204 "copy": true, 00:43:27.204 "flush": true, 00:43:27.204 "get_zone_info": false, 00:43:27.204 "nvme_admin": true, 00:43:27.204 "nvme_io": true, 00:43:27.204 "nvme_io_md": false, 00:43:27.204 "nvme_iov_md": false, 00:43:27.204 "read": true, 00:43:27.204 "reset": true, 00:43:27.204 "seek_data": false, 00:43:27.204 
"seek_hole": false, 00:43:27.204 "unmap": true, 00:43:27.204 "write": true, 00:43:27.204 "write_zeroes": true, 00:43:27.204 "zcopy": false, 00:43:27.204 "zone_append": false, 00:43:27.204 "zone_management": false 00:43:27.204 }, 00:43:27.204 "uuid": "025bef03-2295-4f73-9b79-d3ff197065bc", 00:43:27.204 "zoned": false 00:43:27.204 } 00:43:27.204 ] 00:43:27.204 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:27.204 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102917 00:43:27.204 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:43:27.204 Running I/O for 10 seconds... 00:43:28.143 Latency(us) 00:43:28.143 [2024-11-20T05:54:48.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:28.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:28.143 Nvme0n1 : 1.00 9381.00 36.64 0.00 0.00 0.00 0.00 0.00 00:43:28.143 [2024-11-20T05:54:48.062Z] =================================================================================================================== 00:43:28.143 [2024-11-20T05:54:48.062Z] Total : 9381.00 36.64 0.00 0.00 0.00 0.00 0.00 00:43:28.143 00:43:29.080 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:29.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:29.339 Nvme0n1 : 2.00 10385.00 40.57 0.00 0.00 0.00 0.00 0.00 00:43:29.339 [2024-11-20T05:54:49.258Z] =================================================================================================================== 00:43:29.339 [2024-11-20T05:54:49.258Z] Total : 10385.00 40.57 0.00 0.00 0.00 0.00 0.00 00:43:29.339 00:43:29.339 true 00:43:29.596 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:29.597 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:43:29.859 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:43:29.859 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:43:29.859 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 102917 00:43:30.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:30.164 Nvme0n1 : 3.00 10589.67 41.37 0.00 0.00 0.00 0.00 0.00 00:43:30.164 [2024-11-20T05:54:50.083Z] =================================================================================================================== 00:43:30.164 [2024-11-20T05:54:50.083Z] Total : 10589.67 41.37 0.00 0.00 0.00 0.00 0.00 00:43:30.164 00:43:31.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:31.563 Nvme0n1 : 4.00 10691.00 41.76 0.00 0.00 0.00 0.00 0.00 00:43:31.563 
[2024-11-20T05:54:51.482Z] =================================================================================================================== 00:43:31.563 [2024-11-20T05:54:51.482Z] Total : 10691.00 41.76 0.00 0.00 0.00 0.00 0.00 00:43:31.563 00:43:32.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:32.130 Nvme0n1 : 5.00 10788.20 42.14 0.00 0.00 0.00 0.00 0.00 00:43:32.130 [2024-11-20T05:54:52.049Z] =================================================================================================================== 00:43:32.130 [2024-11-20T05:54:52.049Z] Total : 10788.20 42.14 0.00 0.00 0.00 0.00 0.00 00:43:32.130 00:43:33.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:33.503 Nvme0n1 : 6.00 10804.67 42.21 0.00 0.00 0.00 0.00 0.00 00:43:33.503 [2024-11-20T05:54:53.422Z] =================================================================================================================== 00:43:33.503 [2024-11-20T05:54:53.422Z] Total : 10804.67 42.21 0.00 0.00 0.00 0.00 0.00 00:43:33.503 00:43:34.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:34.440 Nvme0n1 : 7.00 10796.71 42.17 0.00 0.00 0.00 0.00 0.00 00:43:34.440 [2024-11-20T05:54:54.359Z] =================================================================================================================== 00:43:34.440 [2024-11-20T05:54:54.359Z] Total : 10796.71 42.17 0.00 0.00 0.00 0.00 0.00 00:43:34.440 00:43:35.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:35.375 Nvme0n1 : 8.00 10762.88 42.04 0.00 0.00 0.00 0.00 0.00 00:43:35.375 [2024-11-20T05:54:55.294Z] =================================================================================================================== 00:43:35.375 [2024-11-20T05:54:55.294Z] Total : 10762.88 42.04 0.00 0.00 0.00 0.00 0.00 00:43:35.375 00:43:36.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:36.310 Nvme0n1 : 9.00 10739.56 41.95 0.00 0.00 0.00 0.00 0.00 00:43:36.310 [2024-11-20T05:54:56.229Z] =================================================================================================================== 00:43:36.310 [2024-11-20T05:54:56.229Z] Total : 10739.56 41.95 0.00 0.00 0.00 0.00 0.00 00:43:36.310 00:43:37.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:37.246 Nvme0n1 : 10.00 10704.40 41.81 0.00 0.00 0.00 0.00 0.00 00:43:37.246 [2024-11-20T05:54:57.165Z] =================================================================================================================== 00:43:37.246 [2024-11-20T05:54:57.165Z] Total : 10704.40 41.81 0.00 0.00 0.00 0.00 0.00 00:43:37.246 00:43:37.246 00:43:37.246 Latency(us) 00:43:37.246 [2024-11-20T05:54:57.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:37.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:37.246 Nvme0n1 : 10.01 10710.60 41.84 0.00 0.00 11945.75 5398.92 40944.40 00:43:37.246 [2024-11-20T05:54:57.165Z] =================================================================================================================== 00:43:37.246 [2024-11-20T05:54:57.165Z] Total : 10710.60 41.84 0.00 0.00 11945.75 5398.92 40944.40 00:43:37.246 { 00:43:37.246 "results": [ 00:43:37.246 { 00:43:37.246 "job": "Nvme0n1", 00:43:37.246 "core_mask": "0x2", 00:43:37.246 "workload": "randwrite", 00:43:37.246 "status": "finished", 00:43:37.246 "queue_depth": 128, 00:43:37.246 
"io_size": 4096, 00:43:37.246 "runtime": 10.006165, 00:43:37.246 "iops": 10710.596917000668, 00:43:37.246 "mibps": 41.83826920703386, 00:43:37.246 "io_failed": 0, 00:43:37.246 "io_timeout": 0, 00:43:37.246 "avg_latency_us": 11945.752251280985, 00:43:37.246 "min_latency_us": 5398.918095238095, 00:43:37.246 "max_latency_us": 40944.39619047619 00:43:37.246 } 00:43:37.246 ], 00:43:37.246 "core_count": 1 00:43:37.246 } 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102869 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 102869 ']' 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 102869 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 102869 00:43:37.246 killing process with pid 102869 00:43:37.246 Received shutdown signal, test time was about 10.000000 seconds 00:43:37.246 00:43:37.246 Latency(us) 00:43:37.246 [2024-11-20T05:54:57.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:37.246 [2024-11-20T05:54:57.165Z] =================================================================================================================== 00:43:37.246 [2024-11-20T05:54:57.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 102869' 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 102869 00:43:37.246 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 102869 00:43:37.505 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:37.763 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:38.022 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:38.022 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:43:38.281 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:43:38.281 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:43:38.281 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:43:38.541 [2024-11-20 05:54:58.279760] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:43:38.541 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:38.800 2024/11/20 05:54:58 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:89f2bc63-d6e6-4879-a967-d516c91e7d01], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:43:38.800 request: 00:43:38.800 { 00:43:38.800 "method": "bdev_lvol_get_lvstores", 00:43:38.800 "params": { 00:43:38.800 "uuid": "89f2bc63-d6e6-4879-a967-d516c91e7d01" 00:43:38.800 } 00:43:38.800 } 00:43:38.800 Got JSON-RPC error response 00:43:38.800 GoRPCClient: error on JSON-RPC call 00:43:38.800 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:43:38.800 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:43:38.800 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:38.800 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:38.800 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:43:39.059 aio_bdev 00:43:39.059 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 025bef03-2295-4f73-9b79-d3ff197065bc 00:43:39.059 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=025bef03-2295-4f73-9b79-d3ff197065bc 00:43:39.059 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:43:39.059 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:43:39.059 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:43:39.059 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:43:39.059 05:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:43:39.319 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 025bef03-2295-4f73-9b79-d3ff197065bc -t 2000 00:43:39.578 [ 00:43:39.578 { 00:43:39.578 "aliases": [ 00:43:39.578 "lvs/lvol" 00:43:39.578 ], 00:43:39.578 "assigned_rate_limits": { 00:43:39.578 "r_mbytes_per_sec": 0, 00:43:39.578 "rw_ios_per_sec": 0, 00:43:39.578 "rw_mbytes_per_sec": 0, 00:43:39.578 "w_mbytes_per_sec": 0 00:43:39.578 }, 00:43:39.578 "block_size": 4096, 00:43:39.578 "claimed": false, 00:43:39.578 "driver_specific": { 00:43:39.578 "lvol": { 00:43:39.578 "base_bdev": "aio_bdev", 00:43:39.578 "clone": false, 00:43:39.578 "esnap_clone": false, 00:43:39.578 "lvol_store_uuid": "89f2bc63-d6e6-4879-a967-d516c91e7d01", 00:43:39.578 "num_allocated_clusters": 38, 00:43:39.578 "snapshot": false, 00:43:39.578 "thin_provision": false 00:43:39.578 } 00:43:39.578 }, 00:43:39.578 "name": "025bef03-2295-4f73-9b79-d3ff197065bc", 00:43:39.578 "num_blocks": 38912, 00:43:39.578 "product_name": "Logical Volume", 00:43:39.578 "supported_io_types": { 00:43:39.578 "abort": false, 00:43:39.578 "compare": false, 00:43:39.578 "compare_and_write": false, 00:43:39.578 "copy": false, 00:43:39.578 "flush": false, 00:43:39.578 "get_zone_info": false, 00:43:39.578 "nvme_admin": false, 00:43:39.578 "nvme_io": false, 00:43:39.578 "nvme_io_md": false, 00:43:39.578 "nvme_iov_md": false, 00:43:39.578 "read": true, 00:43:39.578 "reset": true, 00:43:39.578 "seek_data": true, 00:43:39.578 "seek_hole": true, 00:43:39.578 "unmap": true, 00:43:39.578 "write": true, 00:43:39.578 "write_zeroes": true, 00:43:39.578 "zcopy": false, 00:43:39.578 "zone_append": false, 00:43:39.578 "zone_management": false 00:43:39.578 }, 00:43:39.578 "uuid": 
"025bef03-2295-4f73-9b79-d3ff197065bc", 00:43:39.578 "zoned": false 00:43:39.578 } 00:43:39.578 ] 00:43:39.578 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:43:39.578 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:39.578 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:43:39.838 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:43:39.838 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:39.838 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:43:40.097 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:43:40.098 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 025bef03-2295-4f73-9b79-d3ff197065bc 00:43:40.357 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 89f2bc63-d6e6-4879-a967-d516c91e7d01 00:43:40.616 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:43:40.875 05:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:43:41.444 ************************************ 00:43:41.444 END TEST lvs_grow_clean 00:43:41.444 ************************************ 00:43:41.444 00:43:41.444 real 0m18.274s 00:43:41.444 user 0m16.396s 00:43:41.444 sys 0m3.246s 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:41.444 ************************************ 00:43:41.444 START TEST lvs_grow_dirty 00:43:41.444 ************************************ 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:43:41.444 05:55:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:43:41.444 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:43:42.013 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:43:42.013 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:43:42.013 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:42.013 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:42.013 05:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:43:42.273 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:43:42.273 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:43:42.273 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c4af8862-a8b0-45fb-a88e-898838b1e46d lvol 150 00:43:42.532 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ba72633-c968-4645-ac02-06dbea9d1503 00:43:42.532 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:43:42.532 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:43:42.792 [2024-11-20 05:55:02.483795] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:43:42.792 [2024-11-20 05:55:02.483964] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:43:42.792 true 00:43:42.792 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:42.792 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:43:43.051 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:43:43.051 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:43:43.310 05:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ba72633-c968-4645-ac02-06dbea9d1503 00:43:43.310 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:43:43.569 [2024-11-20 05:55:03.400113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:43:43.569 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103305 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103305 /var/tmp/bdevperf.sock 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 103305 ']' 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:43.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
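bdevperf (pid 103305 here) runs as a separate app with its own RPC socket; the next records show it attaching the exported namespace and then driving I/O. Condensed, the initiator-side sequence is as follows (both commands appear verbatim just below; $rpc is shorthand introduced for readability):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests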
00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:43:43.828 05:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:43:43.828 [2024-11-20 05:55:03.649715] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:43:43.828 [2024-11-20 05:55:03.649788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103305 ] 00:43:44.088 [2024-11-20 05:55:03.799836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:44.088 [2024-11-20 05:55:03.867131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:44.347 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:43:44.347 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:43:44.347 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:43:44.347 Nvme0n1 00:43:44.607 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:43:44.607 [ 00:43:44.607 { 00:43:44.607 "aliases": [ 00:43:44.607 "8ba72633-c968-4645-ac02-06dbea9d1503" 00:43:44.607 ], 00:43:44.607 "assigned_rate_limits": { 00:43:44.607 "r_mbytes_per_sec": 0, 00:43:44.607 "rw_ios_per_sec": 0, 00:43:44.607 "rw_mbytes_per_sec": 0, 00:43:44.607 "w_mbytes_per_sec": 0 00:43:44.607 }, 00:43:44.607 "block_size": 4096, 00:43:44.607 "claimed": false, 00:43:44.607 "driver_specific": { 00:43:44.607 "mp_policy": "active_passive", 00:43:44.607 "nvme": [ 00:43:44.607 { 00:43:44.607 "ctrlr_data": { 00:43:44.607 "ana_reporting": false, 00:43:44.607 "cntlid": 1, 00:43:44.607 "firmware_revision": "25.01", 00:43:44.607 "model_number": "SPDK bdev Controller", 00:43:44.607 "multi_ctrlr": true, 00:43:44.607 "oacs": { 00:43:44.607 "firmware": 0, 00:43:44.607 "format": 0, 00:43:44.607 "ns_manage": 0, 00:43:44.607 "security": 0 00:43:44.607 }, 00:43:44.607 "serial_number": "SPDK0", 00:43:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:44.607 "vendor_id": "0x8086" 00:43:44.607 }, 00:43:44.607 "ns_data": { 00:43:44.607 "can_share": true, 00:43:44.607 "id": 1 00:43:44.607 }, 00:43:44.607 "trid": { 00:43:44.607 "adrfam": "IPv4", 00:43:44.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:44.607 "traddr": "10.0.0.3", 00:43:44.607 "trsvcid": "4420", 00:43:44.607 "trtype": "TCP" 00:43:44.607 }, 00:43:44.607 "vs": { 00:43:44.607 "nvme_version": "1.3" 00:43:44.607 } 00:43:44.607 } 00:43:44.607 ] 00:43:44.607 }, 00:43:44.607 "memory_domains": [ 00:43:44.607 { 00:43:44.607 "dma_device_id": "system", 00:43:44.607 "dma_device_type": 1 
00:43:44.607 } 00:43:44.607 ], 00:43:44.607 "name": "Nvme0n1", 00:43:44.607 "num_blocks": 38912, 00:43:44.607 "numa_id": -1, 00:43:44.607 "product_name": "NVMe disk", 00:43:44.607 "supported_io_types": { 00:43:44.607 "abort": true, 00:43:44.607 "compare": true, 00:43:44.607 "compare_and_write": true, 00:43:44.607 "copy": true, 00:43:44.607 "flush": true, 00:43:44.607 "get_zone_info": false, 00:43:44.607 "nvme_admin": true, 00:43:44.607 "nvme_io": true, 00:43:44.607 "nvme_io_md": false, 00:43:44.607 "nvme_iov_md": false, 00:43:44.607 "read": true, 00:43:44.607 "reset": true, 00:43:44.607 "seek_data": false, 00:43:44.607 "seek_hole": false, 00:43:44.607 "unmap": true, 00:43:44.607 "write": true, 00:43:44.607 "write_zeroes": true, 00:43:44.607 "zcopy": false, 00:43:44.607 "zone_append": false, 00:43:44.607 "zone_management": false 00:43:44.607 }, 00:43:44.607 "uuid": "8ba72633-c968-4645-ac02-06dbea9d1503", 00:43:44.607 "zoned": false 00:43:44.607 } 00:43:44.607 ] 00:43:44.607 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:44.607 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103338 00:43:44.607 05:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:43:44.867 Running I/O for 10 seconds... 00:43:45.804 Latency(us) 00:43:45.804 [2024-11-20T05:55:05.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:45.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:45.804 Nvme0n1 : 1.00 10188.00 39.80 0.00 0.00 0.00 0.00 0.00 00:43:45.804 [2024-11-20T05:55:05.723Z] =================================================================================================================== 00:43:45.804 [2024-11-20T05:55:05.723Z] Total : 10188.00 39.80 0.00 0.00 0.00 0.00 0.00 00:43:45.804 00:43:46.742 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:46.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:46.742 Nvme0n1 : 2.00 10879.50 42.50 0.00 0.00 0.00 0.00 0.00 00:43:46.742 [2024-11-20T05:55:06.661Z] =================================================================================================================== 00:43:46.742 [2024-11-20T05:55:06.661Z] Total : 10879.50 42.50 0.00 0.00 0.00 0.00 0.00 00:43:46.742 00:43:47.001 true 00:43:47.001 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:47.001 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:43:47.260 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:43:47.260 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:43:47.260 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 103338 00:43:47.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:47.827 Nvme0n1 : 3.00 10948.33 42.77 0.00 0.00 0.00 0.00 0.00 00:43:47.827 [2024-11-20T05:55:07.746Z] =================================================================================================================== 00:43:47.827 [2024-11-20T05:55:07.746Z] Total : 10948.33 42.77 0.00 0.00 0.00 0.00 0.00 00:43:47.827 00:43:48.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:48.762 Nvme0n1 : 4.00 10955.25 42.79 0.00 0.00 0.00 0.00 0.00 00:43:48.762 [2024-11-20T05:55:08.681Z] =================================================================================================================== 00:43:48.762 [2024-11-20T05:55:08.682Z] Total : 10955.25 42.79 0.00 0.00 0.00 0.00 0.00 00:43:48.763 00:43:49.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:49.699 Nvme0n1 : 5.00 10838.60 42.34 0.00 0.00 0.00 0.00 0.00 00:43:49.699 [2024-11-20T05:55:09.618Z] =================================================================================================================== 00:43:49.699 [2024-11-20T05:55:09.618Z] Total : 10838.60 42.34 0.00 0.00 0.00 0.00 0.00 00:43:49.699 00:43:50.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:50.638 Nvme0n1 : 6.00 10867.83 42.45 0.00 0.00 0.00 0.00 0.00 00:43:50.638 [2024-11-20T05:55:10.557Z] =================================================================================================================== 00:43:50.638 [2024-11-20T05:55:10.557Z] Total : 10867.83 42.45 0.00 0.00 0.00 0.00 0.00 00:43:50.638 00:43:52.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:52.018 Nvme0n1 : 7.00 10872.57 42.47 0.00 0.00 0.00 0.00 0.00 00:43:52.018 [2024-11-20T05:55:11.937Z] =================================================================================================================== 00:43:52.018 [2024-11-20T05:55:11.937Z] Total : 10872.57 42.47 0.00 0.00 0.00 0.00 0.00 00:43:52.018 00:43:52.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:52.956 Nvme0n1 : 8.00 10753.88 42.01 0.00 0.00 0.00 0.00 0.00 00:43:52.956 [2024-11-20T05:55:12.875Z] =================================================================================================================== 00:43:52.956 [2024-11-20T05:55:12.875Z] Total : 10753.88 42.01 0.00 0.00 0.00 0.00 0.00 00:43:52.956 00:43:53.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:53.891 Nvme0n1 : 9.00 10753.44 42.01 0.00 0.00 0.00 0.00 0.00 00:43:53.891 [2024-11-20T05:55:13.810Z] =================================================================================================================== 00:43:53.891 [2024-11-20T05:55:13.810Z] Total : 10753.44 42.01 0.00 0.00 0.00 0.00 0.00 00:43:53.891 00:43:54.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:54.828 Nvme0n1 : 10.00 10696.90 41.78 0.00 0.00 0.00 0.00 0.00 00:43:54.828 [2024-11-20T05:55:14.747Z] =================================================================================================================== 00:43:54.828 [2024-11-20T05:55:14.747Z] Total : 10696.90 41.78 0.00 0.00 0.00 0.00 0.00 00:43:54.828 00:43:54.828 00:43:54.828 Latency(us) 00:43:54.828 [2024-11-20T05:55:14.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:54.828 Job: Nvme0n1 (Core Mask 
0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:54.828 Nvme0n1 : 10.01 10698.01 41.79 0.00 0.00 11960.89 5274.09 100863.02 00:43:54.828 [2024-11-20T05:55:14.747Z] =================================================================================================================== 00:43:54.828 [2024-11-20T05:55:14.747Z] Total : 10698.01 41.79 0.00 0.00 11960.89 5274.09 100863.02 00:43:54.828 { 00:43:54.828 "results": [ 00:43:54.828 { 00:43:54.828 "job": "Nvme0n1", 00:43:54.828 "core_mask": "0x2", 00:43:54.828 "workload": "randwrite", 00:43:54.828 "status": "finished", 00:43:54.828 "queue_depth": 128, 00:43:54.828 "io_size": 4096, 00:43:54.828 "runtime": 10.010929, 00:43:54.828 "iops": 10698.008146896256, 00:43:54.828 "mibps": 41.7890943238135, 00:43:54.828 "io_failed": 0, 00:43:54.828 "io_timeout": 0, 00:43:54.828 "avg_latency_us": 11960.886692037526, 00:43:54.828 "min_latency_us": 5274.087619047619, 00:43:54.828 "max_latency_us": 100863.02476190476 00:43:54.828 } 00:43:54.828 ], 00:43:54.828 "core_count": 1 00:43:54.828 } 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103305 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 103305 ']' 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 103305 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103305 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:43:54.828 killing process with pid 103305 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103305' 00:43:54.828 Received shutdown signal, test time was about 10.000000 seconds 00:43:54.828 00:43:54.828 Latency(us) 00:43:54.828 [2024-11-20T05:55:14.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:54.828 [2024-11-20T05:55:14.747Z] =================================================================================================================== 00:43:54.828 [2024-11-20T05:55:14.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 103305 00:43:54.828 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 103305 00:43:55.087 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:43:55.087 05:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:55.346 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:55.346 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:43:55.606 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:43:55.606 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:43:55.606 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 102708 00:43:55.606 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 102708 00:43:55.866 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 102708 Killed "${NVMF_APP[@]}" "$@" 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=103486 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 103486 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 103486 ']' 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:43:55.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
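The waitforlisten step above polls until the relaunched target (pid 103486) exposes its RPC socket. A minimal sketch of that polling idea, assuming the default /var/tmp/spdk.sock path; the real helper in autotest_common.sh does more (it also retries the RPC itself), so this only mirrors the loop loosely:

wait_for_rpc_sock() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 100; i++)); do            # max_retries=100, as in the trace above
    kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died during startup
    [ -S "$sock" ] && return 0               # socket exists: the target created its RPC listener
    sleep 0.1
  done
  return 1
}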
00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:43:55.866 05:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:43:55.866 [2024-11-20 05:55:15.606269] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:43:55.866 [2024-11-20 05:55:15.607700] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:43:55.866 [2024-11-20 05:55:15.607774] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:43:55.866 [2024-11-20 05:55:15.760603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:56.166 [2024-11-20 05:55:15.826160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:43:56.166 [2024-11-20 05:55:15.826213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:43:56.166 [2024-11-20 05:55:15.826224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:43:56.166 [2024-11-20 05:55:15.826232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:43:56.166 [2024-11-20 05:55:15.826239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:43:56.166 [2024-11-20 05:55:15.826623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:43:56.166 [2024-11-20 05:55:15.961277] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:43:56.166 [2024-11-20 05:55:15.961585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
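The app_setup_trace NOTICEs above describe two ways to inspect the tracepoint data. A sketch following the second suggestion verbatim; the destination path here is an arbitrary choice, not something the harness does at this point:

cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.snapshot   # snapshot the shared-memory trace file for offline analysis
# spdk_trace -s nvmf -i 0                             # the live alternative quoted in the NOTICE itself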
00:43:56.760 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:43:56.760 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:43:56.760 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:43:56.760 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable
00:43:56.760 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:43:56.760 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:43:56.760 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:43:57.020 [2024-11-20 05:55:16.758156] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:43:57.020 [2024-11-20 05:55:16.759161] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:43:57.020 [2024-11-20 05:55:16.759660] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8ba72633-c968-4645-ac02-06dbea9d1503
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8ba72633-c968-4645-ac02-06dbea9d1503
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:43:57.020 05:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:43:57.279 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ba72633-c968-4645-ac02-06dbea9d1503 -t 2000
00:43:57.279 [
00:43:57.279 {
00:43:57.279 "aliases": [
00:43:57.279 "lvs/lvol"
00:43:57.279 ],
00:43:57.279 "assigned_rate_limits": {
00:43:57.279 "r_mbytes_per_sec": 0,
00:43:57.279 "rw_ios_per_sec": 0,
00:43:57.279 "rw_mbytes_per_sec": 0,
00:43:57.279 "w_mbytes_per_sec": 0
00:43:57.279 },
00:43:57.279 "block_size": 4096,
00:43:57.279 "claimed": false,
00:43:57.279 "driver_specific": {
00:43:57.279 "lvol": {
00:43:57.279 "base_bdev": "aio_bdev",
00:43:57.279 "clone": false,
00:43:57.279 "esnap_clone": false,
00:43:57.279 "lvol_store_uuid": "c4af8862-a8b0-45fb-a88e-898838b1e46d",
00:43:57.279 "num_allocated_clusters": 38,
00:43:57.279 "snapshot": false,
00:43:57.279 "thin_provision": false
00:43:57.279 }
00:43:57.279 },
00:43:57.279 "name": "8ba72633-c968-4645-ac02-06dbea9d1503",
00:43:57.279 "num_blocks": 38912,
00:43:57.279 "product_name": "Logical Volume",
00:43:57.279 "supported_io_types": {
00:43:57.279 "abort": false,
00:43:57.279 "compare": false,
00:43:57.279 "compare_and_write": false,
00:43:57.279 "copy": false,
00:43:57.279 "flush": false,
00:43:57.279 "get_zone_info": false,
00:43:57.279 "nvme_admin": false,
00:43:57.279 "nvme_io": false,
00:43:57.279 "nvme_io_md": false,
00:43:57.279 "nvme_iov_md": false,
00:43:57.279 "read": true,
00:43:57.279 "reset": true,
00:43:57.279 "seek_data": true,
00:43:57.279 "seek_hole": true,
00:43:57.279 "unmap": true,
00:43:57.279 "write": true,
00:43:57.279 "write_zeroes": true,
00:43:57.279 "zcopy": false,
00:43:57.279 "zone_append": false,
00:43:57.279 "zone_management": false
00:43:57.279 },
00:43:57.279 "uuid": "8ba72633-c968-4645-ac02-06dbea9d1503",
00:43:57.279 "zoned": false
00:43:57.279 }
00:43:57.279 ]
00:43:57.539 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
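The descriptor above reports num_allocated_clusters 38 for the recovered lvol; the checks that follow read the store's free_clusters and total_data_clusters and expect 61 and 99. The three numbers are one identity, easy to confirm by hand (this check is added here for illustration, it is not part of the script):

echo $(( 99 - 38 ))   # total_data_clusters - num_allocated_clusters = 61 free clusters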
"lvol_store_uuid": "c4af8862-a8b0-45fb-a88e-898838b1e46d", 00:43:57.279 "num_allocated_clusters": 38, 00:43:57.279 "snapshot": false, 00:43:57.279 "thin_provision": false 00:43:57.279 } 00:43:57.279 }, 00:43:57.279 "name": "8ba72633-c968-4645-ac02-06dbea9d1503", 00:43:57.279 "num_blocks": 38912, 00:43:57.279 "product_name": "Logical Volume", 00:43:57.279 "supported_io_types": { 00:43:57.279 "abort": false, 00:43:57.279 "compare": false, 00:43:57.279 "compare_and_write": false, 00:43:57.279 "copy": false, 00:43:57.279 "flush": false, 00:43:57.279 "get_zone_info": false, 00:43:57.279 "nvme_admin": false, 00:43:57.279 "nvme_io": false, 00:43:57.279 "nvme_io_md": false, 00:43:57.279 "nvme_iov_md": false, 00:43:57.279 "read": true, 00:43:57.279 "reset": true, 00:43:57.279 "seek_data": true, 00:43:57.279 "seek_hole": true, 00:43:57.279 "unmap": true, 00:43:57.279 "write": true, 00:43:57.279 "write_zeroes": true, 00:43:57.279 "zcopy": false, 00:43:57.279 "zone_append": false, 00:43:57.279 "zone_management": false 00:43:57.279 }, 00:43:57.279 "uuid": "8ba72633-c968-4645-ac02-06dbea9d1503", 00:43:57.279 "zoned": false 00:43:57.279 } 00:43:57.279 ] 00:43:57.539 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:43:57.539 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:57.539 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:43:57.799 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:43:57.799 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:57.799 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:43:58.058 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:43:58.058 05:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:43:58.318 [2024-11-20 05:55:18.007985] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:58.318 
05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:43:58.318 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:58.577 2024/11/20 05:55:18 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c4af8862-a8b0-45fb-a88e-898838b1e46d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:43:58.577 request: 00:43:58.577 { 00:43:58.577 "method": "bdev_lvol_get_lvstores", 00:43:58.577 "params": { 00:43:58.577 "uuid": "c4af8862-a8b0-45fb-a88e-898838b1e46d" 00:43:58.577 } 00:43:58.577 } 00:43:58.577 Got JSON-RPC error response 00:43:58.577 GoRPCClient: error on JSON-RPC call 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:43:58.577 aio_bdev 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ba72633-c968-4645-ac02-06dbea9d1503 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8ba72633-c968-4645-ac02-06dbea9d1503 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:43:58.577 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:43:58.578 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:43:58.837 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ba72633-c968-4645-ac02-06dbea9d1503 -t 2000 00:43:59.096 [ 00:43:59.096 { 00:43:59.096 "aliases": [ 00:43:59.096 "lvs/lvol" 00:43:59.096 ], 00:43:59.096 "assigned_rate_limits": { 00:43:59.096 "r_mbytes_per_sec": 0, 00:43:59.096 "rw_ios_per_sec": 0, 00:43:59.096 "rw_mbytes_per_sec": 0, 00:43:59.096 "w_mbytes_per_sec": 0 00:43:59.096 }, 00:43:59.096 "block_size": 4096, 00:43:59.096 "claimed": false, 00:43:59.096 "driver_specific": { 00:43:59.096 "lvol": { 00:43:59.096 "base_bdev": "aio_bdev", 00:43:59.096 "clone": false, 00:43:59.096 "esnap_clone": false, 00:43:59.096 "lvol_store_uuid": "c4af8862-a8b0-45fb-a88e-898838b1e46d", 00:43:59.096 "num_allocated_clusters": 38, 00:43:59.096 "snapshot": false, 00:43:59.096 "thin_provision": false 00:43:59.096 } 00:43:59.096 }, 00:43:59.096 "name": "8ba72633-c968-4645-ac02-06dbea9d1503", 00:43:59.096 "num_blocks": 38912, 00:43:59.096 "product_name": "Logical Volume", 00:43:59.096 "supported_io_types": { 00:43:59.096 "abort": false, 00:43:59.096 "compare": false, 00:43:59.096 "compare_and_write": false, 00:43:59.096 "copy": false, 00:43:59.096 "flush": false, 00:43:59.096 "get_zone_info": false, 00:43:59.096 "nvme_admin": false, 00:43:59.096 "nvme_io": false, 00:43:59.096 "nvme_io_md": false, 00:43:59.096 "nvme_iov_md": false, 00:43:59.096 "read": true, 00:43:59.096 "reset": true, 00:43:59.096 "seek_data": true, 00:43:59.097 "seek_hole": true, 00:43:59.097 "unmap": true, 00:43:59.097 "write": true, 00:43:59.097 "write_zeroes": true, 00:43:59.097 "zcopy": false, 00:43:59.097 "zone_append": false, 00:43:59.097 "zone_management": false 00:43:59.097 }, 00:43:59.097 "uuid": "8ba72633-c968-4645-ac02-06dbea9d1503", 00:43:59.097 "zoned": false 00:43:59.097 } 00:43:59.097 ] 00:43:59.097 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:43:59.097 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:59.097 05:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:43:59.356 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:43:59.356 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:59.356 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:43:59.614 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:43:59.614 
05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8ba72633-c968-4645-ac02-06dbea9d1503 00:43:59.614 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4af8862-a8b0-45fb-a88e-898838b1e46d 00:43:59.873 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:00.132 05:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:44:00.700 00:44:00.700 real 0m19.166s 00:44:00.700 user 0m25.061s 00:44:00.700 sys 0m8.207s 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:00.700 ************************************ 00:44:00.700 END TEST lvs_grow_dirty 00:44:00.700 ************************************ 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:44:00.700 nvmf_trace.0 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:00.700 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:44:01.269 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:01.269 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:44:01.269 05:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:01.269 05:55:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:01.269 rmmod nvme_tcp 00:44:01.269 rmmod nvme_fabrics 00:44:01.269 rmmod nvme_keyring 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 103486 ']' 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 103486 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 103486 ']' 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 103486 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103486 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:01.269 killing process with pid 103486 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103486' 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 103486 00:44:01.269 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 103486 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:44:01.835 ************************************ 00:44:01.835 END TEST nvmf_lvs_grow 00:44:01.835 ************************************ 00:44:01.835 00:44:01.835 real 0m40.944s 00:44:01.835 user 0m43.087s 00:44:01.835 sys 0m12.829s 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:01.835 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:02.094 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:44:02.094 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:02.095 ************************************ 00:44:02.095 START TEST nvmf_bdev_io_wait 00:44:02.095 ************************************ 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:44:02.095 * Looking for test storage... 00:44:02.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.095 --rc genhtml_branch_coverage=1 00:44:02.095 --rc genhtml_function_coverage=1 00:44:02.095 --rc genhtml_legend=1 00:44:02.095 --rc geninfo_all_blocks=1 00:44:02.095 --rc geninfo_unexecuted_blocks=1 00:44:02.095 00:44:02.095 ' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.095 --rc genhtml_branch_coverage=1 00:44:02.095 --rc genhtml_function_coverage=1 00:44:02.095 --rc genhtml_legend=1 00:44:02.095 --rc geninfo_all_blocks=1 00:44:02.095 --rc geninfo_unexecuted_blocks=1 00:44:02.095 00:44:02.095 ' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.095 --rc genhtml_branch_coverage=1 00:44:02.095 --rc genhtml_function_coverage=1 00:44:02.095 --rc genhtml_legend=1 00:44:02.095 --rc geninfo_all_blocks=1 00:44:02.095 --rc geninfo_unexecuted_blocks=1 00:44:02.095 00:44:02.095 ' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:02.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.095 --rc genhtml_branch_coverage=1 00:44:02.095 --rc genhtml_function_coverage=1 00:44:02.095 --rc genhtml_legend=1 00:44:02.095 --rc geninfo_all_blocks=1 00:44:02.095 --rc 
geninfo_unexecuted_blocks=1 00:44:02.095 00:44:02.095 ' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:44:02.095 05:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:02.095 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:02.095 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:02.095 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:02.096 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:02.354 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:02.355 Cannot find device "nvmf_init_br" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:02.355 Cannot find device "nvmf_init_br2" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:02.355 Cannot find device "nvmf_tgt_br" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:02.355 Cannot find device "nvmf_tgt_br2" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:02.355 Cannot find device "nvmf_init_br" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:02.355 Cannot find device "nvmf_init_br2" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:44:02.355 Cannot find device "nvmf_tgt_br" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:02.355 Cannot find device "nvmf_tgt_br2" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:02.355 Cannot find device "nvmf_br" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:02.355 Cannot find device "nvmf_init_if" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:02.355 Cannot find device "nvmf_init_if2" 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:02.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:02.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:02.355 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:02.614 05:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:02.614 
05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:02.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:02.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:44:02.614 00:44:02.614 --- 10.0.0.3 ping statistics --- 00:44:02.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.614 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:02.614 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:02.614 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:44:02.614 00:44:02.614 --- 10.0.0.4 ping statistics --- 00:44:02.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.614 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:02.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:02.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:44:02.614 00:44:02.614 --- 10.0.0.1 ping statistics --- 00:44:02.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.614 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:02.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:02.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:44:02.614 00:44:02.614 --- 10.0.0.2 ping statistics --- 00:44:02.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:02.614 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:02.614 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=103957 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 103957 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 103957 ']' 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:02.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
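Two things are worth noting in the launch above. First, the @227 array rewrite (NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")) is what confines the target to the test namespace: NVMF_TARGET_NS_CMD is `ip netns exec nvmf_tgt_ns_spdk`, so the @508 launch runs nvmf_tgt inside it. Second, --wait-for-rpc starts the app with subsystem initialization deferred until an explicit RPC, which the test uses below to set bdev options first. A minimal sketch of the mechanism (roughly what common.sh does; the exact PID bookkeeping is simplified here):

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0xF --wait-for-rpc &
    nvmfpid=$!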
00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:02.615 05:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:02.873 [2024-11-20 05:55:22.569932] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:02.873 [2024-11-20 05:55:22.571364] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:44:02.873 [2024-11-20 05:55:22.571441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:02.873 [2024-11-20 05:55:22.733498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:03.132 [2024-11-20 05:55:22.852719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:03.132 [2024-11-20 05:55:22.852790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:03.132 [2024-11-20 05:55:22.852807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:03.132 [2024-11-20 05:55:22.852822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:03.132 [2024-11-20 05:55:22.852834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:03.132 [2024-11-20 05:55:22.854895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:03.132 [2024-11-20 05:55:22.855104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:03.132 [2024-11-20 05:55:22.855319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.132 [2024-11-20 05:55:22.855316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:03.132 [2024-11-20 05:55:22.856170] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
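The four "Reactor started" notices line up with the -m 0xF core mask: 0xF is binary 1111, i.e. cores 0-3, one reactor thread per selected core (the cores logging out of numeric order is presumably just the threads racing to print). The bdevperf jobs later in this test use masks 0x10/0x20/0x40/0x80, i.e. cores 4/5/6/7, so the load generators never share a core with the target. And because the app was started with --interrupt-mode, each spdk_thread is switched to interrupt-driven scheduling instead of busy polling, which is the interrupt_mode variant this suite exists to cover.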
00:44:03.701 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:03.701 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:44:03.701 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:03.701 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:03.701 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:03.961 [2024-11-20 05:55:23.793319] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:03.961 [2024-11-20 05:55:23.793477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:03.961 [2024-11-20 05:55:23.794541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:03.961 [2024-11-20 05:55:23.795705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
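The @18 bdev_set_options call is the heart of this test: a pool of only 5 bdev_io structures with a per-thread cache of 1 (the -p/-c shorthands of rpc.py's bdev_set_options, as used by rpc_cmd above) all but guarantees that queue-depth-128 workloads exhaust the pool and fall into the spdk_bdev_queue_io_wait() retry path that bdev_io_wait exercises. It has to happen before @19 framework_start_init, which is only possible because the target was started with --wait-for-rpc. In effect:

    # rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock (the socket waitforlisten polled above)
    rpc.py bdev_set_options -p 5 -c 1     # starve the bdev_io pool on purpose
    rpc.py framework_start_init           # now let the subsystems come up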
00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:03.961 [2024-11-20 05:55:23.808599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:03.961 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:03.962 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:04.221 Malloc0 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:04.221 [2024-11-20 05:55:23.904963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=104010 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=104012 00:44:04.221 05:55:23 
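Target provisioning at @20-@25 is the stock five-step recipe; spelled out as plain rpc.py calls with the same arguments (the -o in the transport options is carried over from NVMF_TRANSPORT_OPTS earlier in the trace; the comments on flag meanings are assumptions from common usage):

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # -u 8192: in-capsule data size
    rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

10.0.0.3:4420 is the namespaced nvmf_tgt_if address, reachable from the root namespace across the nvmf_br bridge and through the port-4420 ACCEPT rules added earlier.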
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=104014 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:04.221 { 00:44:04.221 "params": { 00:44:04.221 "name": "Nvme$subsystem", 00:44:04.221 "trtype": "$TEST_TRANSPORT", 00:44:04.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.221 "adrfam": "ipv4", 00:44:04.221 "trsvcid": "$NVMF_PORT", 00:44:04.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.221 "hdgst": ${hdgst:-false}, 00:44:04.221 "ddgst": ${ddgst:-false} 00:44:04.221 }, 00:44:04.221 "method": "bdev_nvme_attach_controller" 00:44:04.221 } 00:44:04.221 EOF 00:44:04.221 )") 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=104016 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:04.221 { 00:44:04.221 "params": { 00:44:04.221 "name": "Nvme$subsystem", 00:44:04.221 "trtype": "$TEST_TRANSPORT", 00:44:04.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.221 "adrfam": "ipv4", 00:44:04.221 "trsvcid": "$NVMF_PORT", 00:44:04.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.221 "hdgst": ${hdgst:-false}, 00:44:04.221 "ddgst": ${ddgst:-false} 00:44:04.221 }, 00:44:04.221 "method": "bdev_nvme_attach_controller" 00:44:04.221 } 00:44:04.221 EOF 00:44:04.221 )") 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:44:04.221 05:55:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:04.221 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:04.221 { 00:44:04.221 "params": { 00:44:04.221 "name": "Nvme$subsystem", 00:44:04.221 "trtype": "$TEST_TRANSPORT", 00:44:04.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.221 "adrfam": "ipv4", 00:44:04.221 "trsvcid": "$NVMF_PORT", 00:44:04.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.221 "hdgst": ${hdgst:-false}, 00:44:04.222 "ddgst": ${ddgst:-false} 00:44:04.222 }, 00:44:04.222 "method": "bdev_nvme_attach_controller" 00:44:04.222 } 00:44:04.222 EOF 00:44:04.222 )") 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:04.222 { 00:44:04.222 "params": { 00:44:04.222 "name": "Nvme$subsystem", 00:44:04.222 "trtype": "$TEST_TRANSPORT", 00:44:04.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:04.222 "adrfam": "ipv4", 00:44:04.222 "trsvcid": "$NVMF_PORT", 00:44:04.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:04.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:04.222 "hdgst": ${hdgst:-false}, 00:44:04.222 "ddgst": ${ddgst:-false} 00:44:04.222 }, 00:44:04.222 "method": "bdev_nvme_attach_controller" 00:44:04.222 } 00:44:04.222 EOF 00:44:04.222 )") 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:04.222 "params": { 00:44:04.222 "name": "Nvme1", 00:44:04.222 "trtype": "tcp", 00:44:04.222 "traddr": "10.0.0.3", 00:44:04.222 "adrfam": "ipv4", 00:44:04.222 "trsvcid": "4420", 00:44:04.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:44:04.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:04.222 "hdgst": false, 00:44:04.222 "ddgst": false 00:44:04.222 }, 00:44:04.222 "method": "bdev_nvme_attach_controller" 00:44:04.222 }' 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:04.222 "params": { 00:44:04.222 "name": "Nvme1", 00:44:04.222 "trtype": "tcp", 00:44:04.222 "traddr": "10.0.0.3", 00:44:04.222 "adrfam": "ipv4", 00:44:04.222 "trsvcid": "4420", 00:44:04.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:04.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:04.222 "hdgst": false, 00:44:04.222 "ddgst": false 00:44:04.222 }, 00:44:04.222 "method": "bdev_nvme_attach_controller" 00:44:04.222 }' 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:04.222 "params": { 00:44:04.222 "name": "Nvme1", 00:44:04.222 "trtype": "tcp", 00:44:04.222 "traddr": "10.0.0.3", 00:44:04.222 "adrfam": "ipv4", 00:44:04.222 "trsvcid": "4420", 00:44:04.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:04.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:04.222 "hdgst": false, 00:44:04.222 "ddgst": false 00:44:04.222 }, 00:44:04.222 "method": "bdev_nvme_attach_controller" 00:44:04.222 }' 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:44:04.222 05:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:04.222 "params": { 00:44:04.222 "name": "Nvme1", 00:44:04.222 "trtype": "tcp", 00:44:04.222 "traddr": "10.0.0.3", 00:44:04.222 "adrfam": "ipv4", 00:44:04.222 "trsvcid": "4420", 00:44:04.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:04.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:04.222 "hdgst": false, 00:44:04.222 "ddgst": false 00:44:04.222 }, 00:44:04.222 "method": "bdev_nvme_attach_controller" 00:44:04.222 }' 00:44:04.222 [2024-11-20 05:55:23.961187] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:44:04.222 [2024-11-20 05:55:23.961260] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:44:04.222 [2024-11-20 05:55:23.972940] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:44:04.222 [2024-11-20 05:55:23.973028] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:44:04.222 05:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 104010 00:44:04.222 [2024-11-20 05:55:24.012500] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:44:04.222 [2024-11-20 05:55:24.013702] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:44:04.222 [2024-11-20 05:55:24.017667] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:44:04.222 [2024-11-20 05:55:24.017761] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:44:04.480 [2024-11-20 05:55:24.212639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.480 [2024-11-20 05:55:24.267964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:44:04.480 [2024-11-20 05:55:24.280026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.480 [2024-11-20 05:55:24.332580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:44:04.480 [2024-11-20 05:55:24.345426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.738 [2024-11-20 05:55:24.409687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:44:04.738 [2024-11-20 05:55:24.449195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:04.738 Running I/O for 1 seconds... 00:44:04.738 Running I/O for 1 seconds... 00:44:04.738 [2024-11-20 05:55:24.516400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:44:04.738 Running I/O for 1 seconds... 00:44:04.996 Running I/O for 1 seconds... 
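All four jobs now run concurrently for their one-second window, each on its own core, and the script reaps them with wait (which is why the @37 `wait 104010` line appears before any results: the write job is collected first, then the others as their reports stream in). In outline, with the shared flags factored out for readability:

    bp=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    common=(-q 128 -o 4096 -t 1 -s 256)
    $bp -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${common[@]}" -w write & WRITE_PID=$!
    $bp -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${common[@]}" -w read  & READ_PID=$!
    $bp -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${common[@]}" -w flush & FLUSH_PID=$!
    $bp -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${common[@]}" -w unmap & UNMAP_PID=$!
    wait "$WRITE_PID"; wait "$READ_PID"; wait "$FLUSH_PID"; wait "$UNMAP_PID"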
00:44:05.563 7438.00 IOPS, 29.05 MiB/s
[2024-11-20T05:55:25.482Z] 6026.00 IOPS, 23.54 MiB/s
00:44:05.563 Latency(us)
00:44:05.564 [2024-11-20T05:55:25.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:05.564 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:44:05.564 Nvme1n1 : 1.01 7477.91 29.21 0.00 0.00 17010.82 4025.78 20472.20
00:44:05.564 [2024-11-20T05:55:25.483Z] ===================================================================================================================
00:44:05.564 [2024-11-20T05:55:25.483Z] Total : 7477.91 29.21 0.00 0.00 17010.82 4025.78 20472.20
00:44:05.564
00:44:05.564 Latency(us)
00:44:05.564 [2024-11-20T05:55:25.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:05.564 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:44:05.564 Nvme1n1 : 1.01 6101.93 23.84 0.00 0.00 20885.05 9299.87 37449.14
00:44:05.564 [2024-11-20T05:55:25.483Z] ===================================================================================================================
00:44:05.564 [2024-11-20T05:55:25.483Z] Total : 6101.93 23.84 0.00 0.00 20885.05 9299.87 37449.14
00:44:05.822 249640.00 IOPS, 975.16 MiB/s
00:44:05.822 Latency(us)
00:44:05.822 [2024-11-20T05:55:25.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:05.822 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:44:05.822 Nvme1n1 : 1.00 249265.88 973.69 0.00 0.00 510.92 238.93 1490.16
00:44:05.822 [2024-11-20T05:55:25.741Z] ===================================================================================================================
00:44:05.822 [2024-11-20T05:55:25.741Z] Total : 249265.88 973.69 0.00 0.00 510.92 238.93 1490.16
00:44:05.822 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 104012
05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 104014
00:44:05.822 8427.00 IOPS, 32.92 MiB/s
00:44:05.822 Latency(us)
00:44:05.822 [2024-11-20T05:55:25.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:05.822 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:44:05.822 Nvme1n1 : 1.01 8535.72 33.34 0.00 0.00 14953.72 3058.35 27712.37
00:44:05.822 [2024-11-20T05:55:25.741Z] ===================================================================================================================
00:44:05.822 [2024-11-20T05:55:25.741Z] Total : 8535.72 33.34 0.00 0.00 14953.72 3058.35 27712.37
00:44:05.822 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 104016
00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:44:06.081 05:55:25
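As a sanity check on the tables above, the MiB/s column is just IOPS times the 4 KiB IO size: 7477.91 x 4096 / 2^20 = 29.21 MiB/s for the read job, and 249265.88 x 4096 / 2^20 = 973.69 MiB/s for flush. Flush dwarfs the data-moving workloads because flushing a malloc (RAM) bdev carries no payload and completes essentially for free, while read and write ride the TCP round trip through the veth/bridge path; their average latencies (roughly 17-21 ms at queue depth 128 against a 5-entry bdev_io pool) plausibly reflect the deliberate io_wait queueing rather than raw transport latency.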
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:06.081 rmmod nvme_tcp 00:44:06.081 rmmod nvme_fabrics 00:44:06.081 rmmod nvme_keyring 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 103957 ']' 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 103957 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 103957 ']' 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 103957 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 103957 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:06.081 killing process with pid 103957 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 103957' 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 103957 00:44:06.081 05:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 103957 00:44:06.340 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:06.340 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:06.340 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:06.340 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:44:06.340 05:55:26 
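The shape of nvmfcleanup is visible in the @121-@129 lines above: sync first, then errexit is suspended and module unload is retried up to 20 times, since nvme-tcp can stay busy until the last controller is torn down; the rmmod lines confirm that unloading nvme_tcp drags nvme_fabrics and nvme_keyring out with it. A sketch of the pattern (the back-off between attempts is an assumption; the first pass succeeds here, so the loop body is only partly visible in the trace):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1   # assumed delay between retries
    done
    modprobe -v -r nvme-fabrics
    set -e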
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:44:06.340 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:06.340 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:44:06.599 00:44:06.599 real 0m4.714s 00:44:06.599 user 0m13.534s 00:44:06.599 sys 0m2.858s 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:06.599 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:06.599 
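iptr (its @791 expansion is the three commands above) removes exactly the firewall rules the test added and nothing else: every rule was tagged with an SPDK_NVMF comment by the ipts wrapper, so the cleanup is a single filter pass over the whole ruleset:

    iptables-save | grep -v SPDK_NVMF | iptables-restore

After that, nvmf_veth_fini unwinds the topology in reverse order of creation (nomaster, links down, delete the bridge, delete the veths, then the namespace), and the "real 0m4.714s" summary closes the timed bdev_io_wait run.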
************************************ 00:44:06.599 END TEST nvmf_bdev_io_wait 00:44:06.599 ************************************ 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:06.859 ************************************ 00:44:06.859 START TEST nvmf_queue_depth 00:44:06.859 ************************************ 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:44:06.859 * Looking for test storage... 00:44:06.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:06.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:06.859 --rc genhtml_branch_coverage=1 00:44:06.859 --rc genhtml_function_coverage=1 00:44:06.859 --rc genhtml_legend=1 00:44:06.859 --rc geninfo_all_blocks=1 00:44:06.859 --rc geninfo_unexecuted_blocks=1 00:44:06.859 00:44:06.859 ' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:06.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:06.859 --rc genhtml_branch_coverage=1 00:44:06.859 --rc genhtml_function_coverage=1 00:44:06.859 --rc genhtml_legend=1 00:44:06.859 --rc geninfo_all_blocks=1 00:44:06.859 --rc geninfo_unexecuted_blocks=1 00:44:06.859 00:44:06.859 ' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:06.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:06.859 --rc genhtml_branch_coverage=1 00:44:06.859 --rc genhtml_function_coverage=1 00:44:06.859 --rc genhtml_legend=1 00:44:06.859 --rc geninfo_all_blocks=1 00:44:06.859 --rc geninfo_unexecuted_blocks=1 00:44:06.859 00:44:06.859 ' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:06.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:06.859 --rc genhtml_branch_coverage=1 00:44:06.859 --rc genhtml_function_coverage=1 00:44:06.859 --rc genhtml_legend=1 00:44:06.859 --rc geninfo_all_blocks=1 00:44:06.859 --rc 
geninfo_unexecuted_blocks=1 00:44:06.859 00:44:06.859 ' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:06.859 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:07.119 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:07.120 Cannot find device "nvmf_init_br" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:07.120 Cannot find device "nvmf_init_br2" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:07.120 Cannot find device "nvmf_tgt_br" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:07.120 Cannot find device "nvmf_tgt_br2" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:07.120 Cannot find device "nvmf_init_br" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:07.120 Cannot find device "nvmf_init_br2" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:44:07.120 
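The run of "Cannot find device" lines here is expected, not a failure: queue_depth's nvmf_veth_init starts by tearing down whatever a previous test might have left behind, and since bdev_io_wait already deleted everything, each command fails and is immediately followed by `# true` in the trace. The idiom keeps errexit from aborting the script on an already-clean host, i.e. effectively:

    ip link set nvmf_init_br nomaster || true   # tolerate a clean host
    ip link delete nvmf_br type bridge || true

The rebuild that follows is the same namespace/veth/bridge recipe sketched at the start of the previous test, using the same 10.0.0.1-10.0.0.4 addresses.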
05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:07.120 Cannot find device "nvmf_tgt_br" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:07.120 Cannot find device "nvmf_tgt_br2" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:07.120 Cannot find device "nvmf_br" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:07.120 Cannot find device "nvmf_init_if" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:07.120 Cannot find device "nvmf_init_if2" 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:07.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:07.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:07.120 05:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:07.120 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:07.120 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:07.120 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:07.380 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:07.380 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:44:07.380 00:44:07.380 --- 10.0.0.3 ping statistics --- 00:44:07.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:07.380 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:07.380 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:44:07.380 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:44:07.380 00:44:07.380 --- 10.0.0.4 ping statistics --- 00:44:07.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:07.380 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:07.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:07.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:44:07.380 00:44:07.380 --- 10.0.0.1 ping statistics --- 00:44:07.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:07.380 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:07.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:07.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:44:07.380 00:44:07.380 --- 10.0.0.2 ping statistics --- 00:44:07.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:07.380 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=104309 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 104309 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 104309 ']' 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:07.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:07.380 05:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:07.380 [2024-11-20 05:55:27.283321] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:07.380 [2024-11-20 05:55:27.284692] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:44:07.380 [2024-11-20 05:55:27.284762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:07.639 [2024-11-20 05:55:27.434587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:07.639 [2024-11-20 05:55:27.475210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:07.639 [2024-11-20 05:55:27.475278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:07.639 [2024-11-20 05:55:27.475288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:07.639 [2024-11-20 05:55:27.475297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:07.639 [2024-11-20 05:55:27.475303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:07.639 [2024-11-20 05:55:27.475584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:07.639 [2024-11-20 05:55:27.546091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:07.639 [2024-11-20 05:55:27.546357] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
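The nvmf_veth_init trace above reduces to a small veth-plus-bridge topology: both target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace, all four host-side peer ends are bridged together, and the iptables rules are tagged with an SPDK_NVMF comment so teardown can find them later. A condensed sketch of the same sequence (run as root; every name and address is taken from the trace):

ip netns add nvmf_tgt_ns_spdk
# four veth pairs: *_if is the usable end, *_br the end that joins the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# target ends live in the namespace, initiator ends stay in the default netns
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br   # bridge all host-side ends together
done
# admit NVMe/TCP (port 4420) from the initiator ends; the SPDK_NVMF comment
# tags the rules so cleanup can strip exactly these and nothing else
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4   # initiator -> target sanity pings

The four sub-0.2 ms ping round trips in the trace confirm connectivity in both directions across the bridge before the target application is started.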
00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:08.578 [2024-11-20 05:55:28.256232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:08.578 Malloc0 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
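With the target already running inside the namespace (the trace's `ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2`) and listening on /var/tmp/spdk.sock, the test provisions it entirely over JSON-RPC. A sketch of the equivalent standalone calls via scripts/rpc.py, with the flags copied verbatim from the rpc_cmd lines above:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512 B blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
$RPC bdev_malloc_create 64 512 -b Malloc0
# subsystem allowing any host (-a), with the serial number from the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen on the in-namespace address the sanity pings validated earlier
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420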
00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:08.578 [2024-11-20 05:55:28.328326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=104359 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 104359 /var/tmp/bdevperf.sock 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 104359 ']' 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:08.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:08.578 05:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:08.578 [2024-11-20 05:55:28.377336] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:44:08.578 [2024-11-20 05:55:28.377402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104359 ] 00:44:08.837 [2024-11-20 05:55:28.528493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.837 [2024-11-20 05:55:28.629184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:09.767 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:09.767 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:44:09.767 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:09.767 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:09.767 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:09.767 NVMe0n1 00:44:09.767 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:09.767 05:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:09.767 Running I/O for 10 seconds... 00:44:12.072 10524.00 IOPS, 41.11 MiB/s [2024-11-20T05:55:32.924Z] 11283.00 IOPS, 44.07 MiB/s [2024-11-20T05:55:33.858Z] 11265.00 IOPS, 44.00 MiB/s [2024-11-20T05:55:34.793Z] 11417.50 IOPS, 44.60 MiB/s [2024-11-20T05:55:35.727Z] 11412.20 IOPS, 44.58 MiB/s [2024-11-20T05:55:36.660Z] 11621.50 IOPS, 45.40 MiB/s [2024-11-20T05:55:38.035Z] 11740.71 IOPS, 45.86 MiB/s [2024-11-20T05:55:38.970Z] 11856.12 IOPS, 46.31 MiB/s [2024-11-20T05:55:39.905Z] 11926.56 IOPS, 46.59 MiB/s [2024-11-20T05:55:39.905Z] 11990.80 IOPS, 46.84 MiB/s 00:44:19.986 Latency(us) 00:44:19.986 [2024-11-20T05:55:39.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:19.986 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:44:19.986 Verification LBA range: start 0x0 length 0x4000 00:44:19.986 NVMe0n1 : 10.06 12020.73 46.96 0.00 0.00 84910.67 18974.23 98865.74 00:44:19.986 [2024-11-20T05:55:39.905Z] =================================================================================================================== 00:44:19.986 [2024-11-20T05:55:39.905Z] Total : 12020.73 46.96 0.00 0.00 84910.67 18974.23 98865.74 00:44:19.986 { 00:44:19.986 "results": [ 00:44:19.986 { 00:44:19.986 "job": "NVMe0n1", 00:44:19.986 "core_mask": "0x1", 00:44:19.986 "workload": "verify", 00:44:19.986 "status": "finished", 00:44:19.986 "verify_range": { 00:44:19.986 "start": 0, 00:44:19.986 "length": 16384 00:44:19.986 }, 00:44:19.986 "queue_depth": 1024, 00:44:19.986 "io_size": 4096, 00:44:19.986 "runtime": 10.059288, 00:44:19.986 "iops": 12020.731487158931, 00:44:19.986 "mibps": 46.955982371714576, 00:44:19.986 "io_failed": 0, 00:44:19.986 "io_timeout": 0, 00:44:19.986 "avg_latency_us": 84910.67062716003, 00:44:19.986 "min_latency_us": 18974.23238095238, 00:44:19.986 "max_latency_us": 98865.73714285715 00:44:19.986 } 
00:44:19.986 ], 00:44:19.986 "core_count": 1 00:44:19.986 } 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 104359 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 104359 ']' 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 104359 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104359 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:19.986 killing process with pid 104359 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104359' 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 104359 00:44:19.986 Received shutdown signal, test time was about 10.000000 seconds 00:44:19.986 00:44:19.986 Latency(us) 00:44:19.986 [2024-11-20T05:55:39.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:19.986 [2024-11-20T05:55:39.905Z] =================================================================================================================== 00:44:19.986 [2024-11-20T05:55:39.905Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:19.986 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 104359 00:44:20.245 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:44:20.245 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:44:20.245 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:20.245 05:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:20.245 rmmod nvme_tcp 00:44:20.245 rmmod nvme_fabrics 00:44:20.245 rmmod nvme_keyring 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:44:20.245 
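The workload that produced the results above comes from bdevperf started in RPC-driven mode on its own socket, attached to the target subsystem, then triggered through bdevperf.py. A condensed sketch of the run as traced, assuming the same repository layout (process cleanup here is simplified to a plain kill; the script uses its killprocess helper):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock
# -z: stay idle until told to run over RPC; -q 1024 is the queue depth under
# test, -o 4096 the I/O size in bytes, -w verify the workload, -t 10 the duration
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# attach a bdev (NVMe0 -> NVMe0n1) to the subsystem exported at 10.0.0.3:4420
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off the pre-configured job and wait for the JSON result shown above
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
kill "$bdevperf_pid"

At queue depth 1024 the 10-second verify run sustained about 12,020 IOPS (46.96 MiB/s) with an average latency near 85 ms, the expected trade-off of so deep a queue on a single-core interrupt-mode target.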
05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 104309 ']' 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 104309 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 104309 ']' 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 104309 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104309 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:44:20.245 killing process with pid 104309 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104309' 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 104309 00:44:20.245 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 104309 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:20.504 05:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:20.504 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:20.505 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:20.505 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:44:20.764 00:44:20.764 real 0m14.004s 00:44:20.764 user 0m21.749s 00:44:20.764 sys 0m3.378s 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:20.764 ************************************ 00:44:20.764 END TEST nvmf_queue_depth 00:44:20.764 ************************************ 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:20.764 ************************************ 00:44:20.764 START TEST nvmf_target_multipath 00:44:20.764 ************************************ 00:44:20.764 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:44:21.027 * Looking for test storage... 
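The queue_depth teardown just traced is the setup in reverse, and it is where the SPDK_NVMF comment tags pay off: the iptr helper restores iptables minus every tagged rule, then the veth/bridge topology is deleted. A sketch of that cleanup; the final namespace removal is an assumption, since the body of _remove_spdk_ns is filtered out of this trace:

# strip exactly the rules this test inserted, by their SPDK_NVMF comment tag
iptables-save | grep -v SPDK_NVMF | iptables-restore
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster   # detach from nvmf_br
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumption: what _remove_spdk_ns amounts to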
00:44:21.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:21.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.027 --rc genhtml_branch_coverage=1 00:44:21.027 --rc genhtml_function_coverage=1 00:44:21.027 --rc genhtml_legend=1 00:44:21.027 --rc geninfo_all_blocks=1 00:44:21.027 --rc geninfo_unexecuted_blocks=1 00:44:21.027 00:44:21.027 ' 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:21.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.027 --rc genhtml_branch_coverage=1 00:44:21.027 --rc genhtml_function_coverage=1 00:44:21.027 --rc genhtml_legend=1 00:44:21.027 --rc geninfo_all_blocks=1 00:44:21.027 --rc geninfo_unexecuted_blocks=1 00:44:21.027 00:44:21.027 ' 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:21.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.027 --rc genhtml_branch_coverage=1 00:44:21.027 --rc genhtml_function_coverage=1 00:44:21.027 --rc genhtml_legend=1 00:44:21.027 --rc geninfo_all_blocks=1 00:44:21.027 --rc geninfo_unexecuted_blocks=1 00:44:21.027 00:44:21.027 ' 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:21.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:21.027 --rc genhtml_branch_coverage=1 00:44:21.027 --rc genhtml_function_coverage=1 00:44:21.027 --rc 
genhtml_legend=1 00:44:21.027 --rc geninfo_all_blocks=1 00:44:21.027 --rc geninfo_unexecuted_blocks=1 00:44:21.027 00:44:21.027 ' 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:21.027 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:21.028 05:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:21.028 05:55:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:21.028 Cannot find device "nvmf_init_br" 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:21.028 Cannot find device "nvmf_init_br2" 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:21.028 Cannot find device "nvmf_tgt_br" 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:44:21.028 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:21.306 Cannot find device "nvmf_tgt_br2" 00:44:21.306 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:44:21.306 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:44:21.306 Cannot find device "nvmf_init_br" 00:44:21.306 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:44:21.306 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:21.306 Cannot find device "nvmf_init_br2" 00:44:21.306 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:44:21.306 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:21.306 Cannot find device "nvmf_tgt_br" 00:44:21.306 05:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:21.306 Cannot find device "nvmf_tgt_br2" 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:21.306 Cannot find device "nvmf_br" 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:21.306 Cannot find device "nvmf_init_if" 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:21.306 Cannot find device "nvmf_init_if2" 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:21.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:21.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:44:21.306 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:21.307 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:21.584 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:21.584 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:21.584 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:21.584 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:21.584 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:21.584 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:44:21.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:44:21.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms
00:44:21.585
00:44:21.585 --- 10.0.0.3 ping statistics ---
00:44:21.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:44:21.585 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:44:21.585 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:44:21.585 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms
00:44:21.585
00:44:21.585 --- 10.0.0.4 ping statistics ---
00:44:21.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:44:21.585 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:44:21.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:44:21.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms
00:44:21.585
00:44:21.585 --- 10.0.0.1 ping statistics ---
00:44:21.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:44:21.585 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:44:21.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:44:21.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms
00:44:21.585
00:44:21.585 --- 10.0.0.2 ping statistics ---
00:44:21.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:44:21.585 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=104746
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 104746
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 104746 ']'
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100
00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:44:21.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:21.585 05:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:44:21.585 [2024-11-20 05:55:41.440847] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:21.585 [2024-11-20 05:55:41.442498] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:44:21.585 [2024-11-20 05:55:41.442581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:21.844 [2024-11-20 05:55:41.605263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:21.844 [2024-11-20 05:55:41.708671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:21.844 [2024-11-20 05:55:41.708757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:21.844 [2024-11-20 05:55:41.708784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:21.844 [2024-11-20 05:55:41.708807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:21.844 [2024-11-20 05:55:41.708824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:21.844 [2024-11-20 05:55:41.711162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:21.844 [2024-11-20 05:55:41.711257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:21.844 [2024-11-20 05:55:41.711469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:21.844 [2024-11-20 05:55:41.711482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:22.104 [2024-11-20 05:55:41.911522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:22.104 [2024-11-20 05:55:41.912185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:22.104 [2024-11-20 05:55:41.912753] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:22.104 [2024-11-20 05:55:41.912831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:22.104 [2024-11-20 05:55:41.913050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
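The trace above (nvmf/common.sh@177 through @225, then the nvmf_tgt start at @508) builds the harness's two-path test topology: veth pairs for the initiator side (nvmf_init_if/nvmf_init_if2) and the target side (nvmf_tgt_if/nvmf_tgt_if2), the target ends moved into the nvmf_tgt_ns_spdk network namespace, everything stitched together by the nvmf_br bridge, and TCP port 4420 opened before the target is launched inside the namespace. Distilled to a single path, a minimal sketch of the same idea looks like the following; the names and addresses mirror the trace, but the script itself is illustrative, not the actual nvmf/common.sh source:

    # Sketch: one initiator-side path into a namespaced SPDK target.
    # Assumes standard iproute2 and iptables tooling; run as root.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + its bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + its bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge joins the veth peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.3                                           # same reachability check as @222

The second path repeats the pattern with the *2/_br2 names and 10.0.0.2/10.0.0.4, which is what later gives the two nvme connect calls independent routes to the same subsystem.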
00:44:22.673 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:22.673 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:44:22.673 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:22.673 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:22.673 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:44:22.673 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:22.673 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:44:22.932 [2024-11-20 05:55:42.785299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:22.932 05:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:44:23.501 Malloc0 00:44:23.501 05:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:44:23.501 05:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:24.068 05:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:24.068 [2024-11-20 05:55:43.849298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:24.068 05:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:44:24.326 [2024-11-20 05:55:44.041094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:44:24.326 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:44:24.327 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:44:24.585 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:44:24.586 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:44:24.586 05:55:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:44:24.586 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:44:24.586 05:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:44:26.487 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:44:26.487 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=104885
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:44:26.488 05:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1
00:44:26.488 [global]
00:44:26.488 thread=1
00:44:26.488 invalidate=1
00:44:26.488 rw=randrw
00:44:26.488 time_based=1
00:44:26.488 runtime=6
00:44:26.488 ioengine=libaio
00:44:26.488 direct=1
00:44:26.488 bs=4096
00:44:26.488 iodepth=128
00:44:26.488 norandommap=0
00:44:26.488 numjobs=1
00:44:26.488
00:44:26.488 verify_dump=1
00:44:26.488 verify_backlog=512
00:44:26.488 verify_state_save=0
00:44:26.488 do_verify=1
00:44:26.488 verify=crc32c-intel
00:44:26.488 [job0]
00:44:26.488 filename=/dev/nvme0n1
00:44:26.488 Could not set queue depth (nvme0n1)
00:44:26.746 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:44:26.746 fio-3.35
00:44:26.746 Starting 1 thread
00:44:27.681 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath --
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:44:27.940 05:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:44:29.315 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:44:29.315 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:44:29.315 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:44:29.315 05:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:44:29.315 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:44:29.573 05:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:44:30.508 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:44:30.508 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:44:30.508 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:44:30.508 05:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 104885
00:44:33.041
00:44:33.041 job0: (groupid=0, jobs=1): err= 0: pid=104906: Wed Nov 20 05:55:52 2024
00:44:33.041 read: IOPS=14.5k, BW=56.5MiB/s (59.2MB/s)(339MiB/6003msec)
00:44:33.041 slat (usec): min=3, max=4316, avg=38.89, stdev=182.15
00:44:33.041 clat (usec): min=1101, max=14282, avg=5938.01, stdev=964.08
00:44:33.041 lat (usec): min=1142, max=14297, avg=5976.90, stdev=972.97
00:44:33.041 clat percentiles (usec):
00:44:33.041 | 1.00th=[ 3654], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5342],
00:44:33.041 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 5997],
00:44:33.041 | 70.00th=[ 6194], 80.00th=[ 6456], 90.00th=[ 6980], 95.00th=[ 7635],
00:44:33.041 | 99.00th=[ 8979], 99.50th=[ 9503], 99.90th=[11994], 99.95th=[12649],
00:44:33.041 | 99.99th=[13435]
00:44:33.041 bw ( KiB/s): min=11472, max=37976, per=50.72%, avg=29325.09, stdev=9720.78, samples=11
00:44:33.041 iops : min= 2868, max= 9494, avg=7331.27, stdev=2430.19, samples=11
00:44:33.041 write: IOPS=8784, BW=34.3MiB/s (36.0MB/s)(177MiB/5170msec); 0 zone resets
00:44:33.041 slat (usec): min=4, max=4592, avg=48.62, stdev=100.16
00:44:33.041 clat (usec): min=662, max=13853, avg=5407.45, stdev=815.04
00:44:33.041 lat (usec): min=739, max=13875, avg=5456.08, stdev=817.17
00:44:33.041 clat percentiles (usec):
00:44:33.041 | 1.00th=[ 3064], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 5014],
00:44:33.041 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5407], 60.00th=[ 5538],
00:44:33.041 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6194],
00:44:33.041 | 99.00th=[ 8455], 99.50th=[ 9372], 99.90th=[11994], 99.95th=[12387],
00:44:33.041 | 99.99th=[13566]
00:44:33.041 bw ( KiB/s): min=12000, max=37448, per=83.59%, avg=29370.91, stdev=9421.36, samples=11
00:44:33.041 iops : min= 3000, max= 9362, avg=7342.73, stdev=2355.34, samples=11
00:44:33.041 lat (usec) : 750=0.01%, 1000=0.01%
00:44:33.041 lat (msec) : 2=0.03%, 4=2.95%, 10=96.70%, 20=0.31%
00:44:33.041 cpu : usr=5.65%, sys=25.78%, ctx=10447, majf=0, minf=127
00:44:33.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:44:33.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:33.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:44:33.041 issued rwts: total=86778,45415,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:33.041 latency : target=0, window=0, percentile=100.00%, depth=128
00:44:33.041
00:44:33.041 Run status group 0 (all jobs):
00:44:33.041 READ: bw=56.5MiB/s (59.2MB/s), 56.5MiB/s-56.5MiB/s (59.2MB/s-59.2MB/s), io=339MiB (355MB), run=6003-6003msec
00:44:33.041 WRITE: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=177MiB (186MB), run=5170-5170msec
00:44:33.041
00:44:33.041 Disk stats (read/write):
00:44:33.041 nvme0n1: ios=85887/44209, merge=0/0, ticks=475934/226010, in_queue=701944, util=98.58%
00:44:33.041 05:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:44:33.041 05:55:52
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:44:33.300 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:44:33.300 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:44:33.300 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:33.300 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:44:33.300 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:44:33.300 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:44:33.301 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:44:33.301 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:44:33.301 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:33.301 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:44:33.301 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:44:33.301 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:44:33.301 05:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:44:34.236 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:44:34.236 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:44:34.236 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:44:34.236 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin
00:44:34.236 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=105036
00:44:34.236 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1
00:44:34.236 05:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:44:34.236 [global]
00:44:34.236 thread=1
00:44:34.236 invalidate=1
00:44:34.236 rw=randrw
00:44:34.236 time_based=1
00:44:34.236 runtime=6
00:44:34.236 ioengine=libaio
00:44:34.236 direct=1
00:44:34.236 bs=4096
00:44:34.236 iodepth=128
00:44:34.236 norandommap=0
00:44:34.236 numjobs=1
00:44:34.236
00:44:34.494 verify_dump=1
00:44:34.494 verify_backlog=512
00:44:34.494 verify_state_save=0
00:44:34.494 do_verify=1
00:44:34.494 verify=crc32c-intel
00:44:34.494 [job0]
00:44:34.494 filename=/dev/nvme0n1
00:44:34.494 Could not set queue depth (nvme0n1)
00:44:34.494 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:44:34.494 fio-3.35
00:44:34.494 Starting 1 thread
00:44:35.429 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:44:35.688 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible
00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ !
-e /sys/block/nvme0c0n1/ana_state ]] 00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:44:35.947 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:35.948 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:44:35.948 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:44:35.948 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:44:35.948 05:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:44:36.884 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:44:36.884 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:44:36.884 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:44:36.884 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:44:37.143 05:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]]
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:44:37.402 05:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:44:38.337 05:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:44:38.337 05:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:44:38.337 05:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:44:38.337 05:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 105036
00:44:40.871
00:44:40.871 job0: (groupid=0, jobs=1): err= 0: pid=105057: Wed Nov 20 05:56:00 2024
00:44:40.871 read: IOPS=14.0k, BW=54.8MiB/s (57.4MB/s)(329MiB/6001msec)
00:44:40.871 slat (usec): min=5, max=3984, avg=35.31, stdev=170.01
00:44:40.871 clat (usec): min=352, max=21283, avg=6135.47, stdev=2359.83
00:44:40.871 lat (usec): min=366, max=21296, avg=6170.78, stdev=2360.78
00:44:40.871 clat percentiles (usec):
00:44:40.871 | 1.00th=[ 1156], 5.00th=[ 2638], 10.00th=[ 4359], 20.00th=[ 5145],
00:44:40.871 | 30.00th=[ 5473], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 6063],
00:44:40.871 | 70.00th=[ 6259], 80.00th=[ 6652], 90.00th=[ 8029], 95.00th=[11731],
00:44:40.871 | 99.00th=[15008], 99.50th=[15926], 99.90th=[17957], 99.95th=[18744],
00:44:40.871 | 99.99th=[20579]
00:44:40.871 bw ( KiB/s): min=14440, max=36920, per=51.03%, avg=28619.91, stdev=6855.55, samples=11
00:44:40.871 iops : min= 3610, max= 9230, avg=7154.91, stdev=1713.86, samples=11
00:44:40.871 write: IOPS=8239, BW=32.2MiB/s (33.7MB/s)(171MiB/5305msec); 0 zone resets
00:44:40.871 slat (usec): min=10, max=2498, avg=45.32, stdev=100.02
00:44:40.871 clat (usec): min=222, max=19537, avg=5608.50, stdev=2281.00
00:44:40.871 lat (usec): min=255, max=19566, avg=5653.82, stdev=2279.72
00:44:40.871 clat percentiles (usec):
00:44:40.871 | 1.00th=[ 906], 5.00th=[ 1680], 10.00th=[ 3589], 20.00th=[ 4883],
00:44:40.871 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5538],
00:44:40.871 | 70.00th=[ 5669], 80.00th=[ 5866], 90.00th=[ 6652], 95.00th=[11863],
00:44:40.871 | 99.00th=[13566], 99.50th=[14091], 99.90th=[15926], 99.95th=[16450],
00:44:40.871 | 99.99th=[17171]
00:44:40.871 bw ( KiB/s): min=14928, max=36344, per=86.89%, avg=28636.18, stdev=6605.41, samples=11
00:44:40.871 iops : min= 3732, max= 9086, avg=7159.00, stdev=1651.32, samples=11
00:44:40.871 lat (usec) : 250=0.01%, 500=0.08%, 750=0.26%, 1000=0.60%
00:44:40.871 lat (msec) : 2=3.58%, 4=5.15%, 10=83.73%, 20=6.59%, 50=0.01%
00:44:40.871 cpu : usr=5.01%, sys=23.20%, ctx=11289, majf=0, minf=127
00:44:40.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:44:40.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:44:40.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:44:40.871 issued rwts: total=84133,43710,0,0 short=0,0,0,0 dropped=0,0,0,0
00:44:40.871 latency : target=0, window=0, percentile=100.00%, depth=128
00:44:40.871
00:44:40.871 Run status group 0 (all jobs):
00:44:40.871 READ: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=329MiB (345MB), run=6001-6001msec
00:44:40.871 WRITE: bw=32.2MiB/s (33.7MB/s), 32.2MiB/s-32.2MiB/s (33.7MB/s-33.7MB/s), io=171MiB (179MB), run=5305-5305msec
00:44:40.871
00:44:40.871 Disk stats (read/write):
00:44:40.871 nvme0n1: ios=83126/42534, merge=0/0, ticks=482039/229162, in_queue=711201, util=98.66%
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:44:40.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0
00:44:40.871 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:44:41.129 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state
00:44:41.129 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state
00:44:41.129 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT
00:44:41.129 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini
00:44:41.129 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:44:41.129 05:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:44:41.129 05:56:01
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:41.129 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:44:41.129 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:41.129 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:41.129 rmmod nvme_tcp 00:44:41.129 rmmod nvme_fabrics 00:44:41.129 rmmod nvme_keyring 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 104746 ']' 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 104746 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 104746 ']' 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 104746 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 104746 00:44:41.388 killing process with pid 104746 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 104746' 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 104746 00:44:41.388 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 104746 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:44:41.648 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:44:41.908 00:44:41.908 real 0m21.061s 00:44:41.908 user 1m9.862s 00:44:41.908 sys 0m9.491s 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:44:41.908 ************************************ 00:44:41.908 END TEST nvmf_target_multipath 00:44:41.908 ************************************ 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:41.908 ************************************ 00:44:41.908 START TEST nvmf_zcopy 00:44:41.908 ************************************ 00:44:41.908 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:44:42.169 * Looking for test storage... 00:44:42.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:44:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.170 --rc genhtml_branch_coverage=1 00:44:42.170 --rc genhtml_function_coverage=1 00:44:42.170 --rc genhtml_legend=1 00:44:42.170 --rc geninfo_all_blocks=1 00:44:42.170 --rc geninfo_unexecuted_blocks=1 00:44:42.170 00:44:42.170 ' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:44:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.170 --rc genhtml_branch_coverage=1 00:44:42.170 --rc genhtml_function_coverage=1 00:44:42.170 --rc genhtml_legend=1 00:44:42.170 --rc geninfo_all_blocks=1 00:44:42.170 --rc geninfo_unexecuted_blocks=1 00:44:42.170 00:44:42.170 ' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:44:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.170 --rc genhtml_branch_coverage=1 00:44:42.170 --rc genhtml_function_coverage=1 00:44:42.170 --rc genhtml_legend=1 00:44:42.170 --rc geninfo_all_blocks=1 00:44:42.170 --rc geninfo_unexecuted_blocks=1 00:44:42.170 00:44:42.170 ' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:44:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:42.170 --rc genhtml_branch_coverage=1 00:44:42.170 --rc genhtml_function_coverage=1 00:44:42.170 --rc genhtml_legend=1 00:44:42.170 --rc geninfo_all_blocks=1 00:44:42.170 --rc geninfo_unexecuted_blocks=1 00:44:42.170 00:44:42.170 ' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:42.170 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:42.171 05:56:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:42.171 05:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:44:42.171 05:56:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:44:42.171 Cannot find device "nvmf_init_br" 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:44:42.171 Cannot find device "nvmf_init_br2" 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:44:42.171 Cannot find device "nvmf_tgt_br" 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:44:42.171 Cannot find device "nvmf_tgt_br2" 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:44:42.171 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:44:42.432 Cannot find device "nvmf_init_br" 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:44:42.432 Cannot find device "nvmf_init_br2" 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:44:42.432 Cannot find device "nvmf_tgt_br" 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:44:42.432 Cannot find device "nvmf_tgt_br2" 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:44:42.432 Cannot find device 
"nvmf_br" 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:44:42.432 Cannot find device "nvmf_init_if" 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:44:42.432 Cannot find device "nvmf_init_if2" 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:42.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:42.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:44:42.432 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:44:42.692 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:42.692 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:44:42.692 00:44:42.692 --- 10.0.0.3 ping statistics --- 00:44:42.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:42.692 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:44:42.692 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:44:42.692 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:44:42.692 00:44:42.692 --- 10.0.0.4 ping statistics --- 00:44:42.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:42.692 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:42.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:42.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:44:42.692 00:44:42.692 --- 10.0.0.1 ping statistics --- 00:44:42.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:42.692 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:44:42.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:42.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:44:42.692 00:44:42.692 --- 10.0.0.2 ping statistics --- 00:44:42.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:42.692 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:44:42.692 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=105390 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 105390 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 105390 ']' 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:44:42.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:44:42.693 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:42.693 [2024-11-20 05:56:02.594141] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:42.693 [2024-11-20 05:56:02.595548] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:44:42.693 [2024-11-20 05:56:02.595611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:42.953 [2024-11-20 05:56:02.755918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:42.953 [2024-11-20 05:56:02.823247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:42.953 [2024-11-20 05:56:02.823323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:42.953 [2024-11-20 05:56:02.823339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:42.953 [2024-11-20 05:56:02.823353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:42.953 [2024-11-20 05:56:02.823364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:42.953 [2024-11-20 05:56:02.823775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:43.213 [2024-11-20 05:56:02.941997] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:43.213 [2024-11-20 05:56:02.942432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
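The nvmfappstart step traced here reduces to launching nvmf_tgt inside the target namespace and polling until its JSON-RPC UNIX socket comes up. A minimal sketch of that pattern, assuming the binary path and flags shown in the trace; wait_for_rpc_sock is a hypothetical, simplified stand-in for the harness's waitforlisten:

NETNS=nvmf_tgt_ns_spdk
RPC_SOCK=/var/tmp/spdk.sock

# Start the target in the namespace with interrupt mode and core mask 0x2,
# mirroring the trace, and remember the background pid as the harness does.
ip netns exec "$NETNS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# Hypothetical stand-in for waitforlisten: poll until the RPC UNIX socket
# exists while the target process is still alive.
wait_for_rpc_sock() {
    local retries=100
    while (( retries-- > 0 )); do
        kill -0 "$nvmfpid" 2>/dev/null || return 1   # target died
        [[ -S "$RPC_SOCK" ]] && return 0             # socket is up
        sleep 0.1
    done
    return 1
}
wait_for_rpc_sock

The real waitforlisten additionally confirms the socket answers an RPC before returning; checking only for the socket file is the simplification taken here.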
00:44:43.213 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:44:43.213 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:44:43.213 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:43.213 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:43.213 05:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:43.213 [2024-11-20 05:56:03.048850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:43.213 [2024-11-20 05:56:03.069227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:44:43.213 05:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:43.213 malloc0 00:44:43.213 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:43.214 { 00:44:43.214 "params": { 00:44:43.214 "name": "Nvme$subsystem", 00:44:43.214 "trtype": "$TEST_TRANSPORT", 00:44:43.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:43.214 "adrfam": "ipv4", 00:44:43.214 "trsvcid": "$NVMF_PORT", 00:44:43.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:43.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:43.214 "hdgst": ${hdgst:-false}, 00:44:43.214 "ddgst": ${ddgst:-false} 00:44:43.214 }, 00:44:43.214 "method": "bdev_nvme_attach_controller" 00:44:43.214 } 00:44:43.214 EOF 00:44:43.214 )") 00:44:43.214 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:44:43.474 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:44:43.474 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:44:43.474 05:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:43.474 "params": { 00:44:43.474 "name": "Nvme1", 00:44:43.474 "trtype": "tcp", 00:44:43.474 "traddr": "10.0.0.3", 00:44:43.474 "adrfam": "ipv4", 00:44:43.474 "trsvcid": "4420", 00:44:43.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:43.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:43.474 "hdgst": false, 00:44:43.474 "ddgst": false 00:44:43.474 }, 00:44:43.474 "method": "bdev_nvme_attach_controller" 00:44:43.474 }' 00:44:43.474 [2024-11-20 05:56:03.186321] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
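gen_nvmf_target_json, traced just above, prints the bdev_nvme_attach_controller entry that bdevperf then reads from a process-substitution fd (--json /dev/fd/62). A sketch of the same idea; the inner method/params object is taken verbatim from the trace, while the outer subsystems/config wrapper is assumed from SPDK's JSON-config layout:

# Emit a JSON config that attaches Nvme1 over TCP, then hand it to bdevperf
# on a file descriptor via process substitution instead of a temp file.
gen_config() {
  cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Same bdevperf invocation as the trace: 10 s verify workload, queue depth
# 128, 8 KiB I/O; <(...) expands to a /dev/fd/NN path like the one logged.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_config) -t 10 -q 128 -w verify -o 8192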
00:44:43.474 [2024-11-20 05:56:03.186431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105429 ] 00:44:43.474 [2024-11-20 05:56:03.344709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.734 [2024-11-20 05:56:03.437556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.993 Running I/O for 10 seconds... 00:44:45.867 8174.00 IOPS, 63.86 MiB/s [2024-11-20T05:56:06.725Z] 8233.00 IOPS, 64.32 MiB/s [2024-11-20T05:56:08.100Z] 8211.00 IOPS, 64.15 MiB/s [2024-11-20T05:56:08.666Z] 8215.25 IOPS, 64.18 MiB/s [2024-11-20T05:56:10.142Z] 8230.20 IOPS, 64.30 MiB/s [2024-11-20T05:56:10.708Z] 8236.50 IOPS, 64.35 MiB/s [2024-11-20T05:56:12.086Z] 8240.00 IOPS, 64.38 MiB/s [2024-11-20T05:56:13.024Z] 8245.88 IOPS, 64.42 MiB/s [2024-11-20T05:56:13.963Z] 8251.00 IOPS, 64.46 MiB/s [2024-11-20T05:56:13.963Z] 8256.30 IOPS, 64.50 MiB/s 00:44:54.044 Latency(us) 00:44:54.044 [2024-11-20T05:56:13.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:54.044 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:44:54.044 Verification LBA range: start 0x0 length 0x1000 00:44:54.044 Nvme1n1 : 10.01 8258.76 64.52 0.00 0.00 15455.93 2278.16 20971.52 00:44:54.044 [2024-11-20T05:56:13.963Z] =================================================================================================================== 00:44:54.044 [2024-11-20T05:56:13.963Z] Total : 8258.76 64.52 0.00 0.00 15455.93 2278.16 20971.52 00:44:54.044 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=105541 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:54.045 { 00:44:54.045 "params": { 00:44:54.045 "name": "Nvme$subsystem", 00:44:54.045 "trtype": "$TEST_TRANSPORT", 00:44:54.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:54.045 "adrfam": "ipv4", 00:44:54.045 "trsvcid": "$NVMF_PORT", 00:44:54.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:54.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:54.045 "hdgst": ${hdgst:-false}, 00:44:54.045 "ddgst": ${ddgst:-false} 00:44:54.045 }, 00:44:54.045 "method": "bdev_nvme_attach_controller" 00:44:54.045 } 00:44:54.045 EOF 00:44:54.045 )") 00:44:54.045 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:44:54.045 [2024-11-20 
05:56:13.960427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.045 [2024-11-20 05:56:13.960639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:44:54.305 2024/11/20 05:56:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:44:54.305 05:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:54.305 "params": { 00:44:54.305 "name": "Nvme1", 00:44:54.305 "trtype": "tcp", 00:44:54.305 "traddr": "10.0.0.3", 00:44:54.305 "adrfam": "ipv4", 00:44:54.305 "trsvcid": "4420", 00:44:54.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:54.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:54.305 "hdgst": false, 00:44:54.305 "ddgst": false 00:44:54.305 }, 00:44:54.305 "method": "bdev_nvme_attach_controller" 00:44:54.305 }' 00:44:54.305 [2024-11-20 05:56:13.972411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:13.972435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:13.984396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:13.984414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:13.996396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:13.996416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.008395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.008410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.015920] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 
initialization... 00:44:54.305 [2024-11-20 05:56:14.016021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105541 ] 00:44:54.305 [2024-11-20 05:56:14.020396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.020418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.032394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.032414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.044391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.044412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.056393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.056409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.068393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.068411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.080392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.080411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.092393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.092411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.104394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.104414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.116391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.116406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.305 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.305 [2024-11-20 05:56:14.128391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.305 [2024-11-20 05:56:14.128411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.306 [2024-11-20 05:56:14.140391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.306 [2024-11-20 05:56:14.140409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.306 [2024-11-20 05:56:14.152391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.306 [2024-11-20 05:56:14.152410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.306 [2024-11-20 05:56:14.163690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:54.306 [2024-11-20 05:56:14.164389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.306 [2024-11-20 
05:56:14.164407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.306 [2024-11-20 05:56:14.176392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.306 [2024-11-20 05:56:14.176410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.306 [2024-11-20 05:56:14.188391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.306 [2024-11-20 05:56:14.188409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.306 [2024-11-20 05:56:14.200391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.306 [2024-11-20 05:56:14.200411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.306 [2024-11-20 05:56:14.212390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.306 [2024-11-20 05:56:14.212410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.306 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.566 [2024-11-20 05:56:14.224392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.566 [2024-11-20 05:56:14.224408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.566 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:54.566 [2024-11-20 05:56:14.231431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:54.566 [2024-11-20 05:56:14.236391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:54.566 [2024-11-20 05:56:14.236408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:54.566 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:44:54.566 [2024-11-20 05:56:14.248391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:44:54.566 [2024-11-20 05:56:14.248409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:44:54.566 2024/11/20 05:56:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:44:54.566 [... the same three-record failure repeats verbatim, timestamps aside, for every retry from 05:56:14.260 through 05:56:14.440, one attempt roughly every 12 ms ...]
00:44:54.567 Running I/O for 5 seconds...
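For reference, the params map dumped by the Go JSON-RPC client above decodes to the following request body. This is a reconstruction from the log, not captured wire traffic; the "id" value and the pretty-printing are illustrative, and the %!s(bool=false) in the dump is only the client printing a bool through a %s verb (the actual field is no_auto_visible: false):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "nvmf_subsystem_add_ns",
  "params": {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": { "bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false }
  }
}

A sketch of the equivalent call through SPDK's stock scripts/rpc.py client (assumed here; flag spelling can differ between SPDK versions) would be:

$ scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0

Because NSID 1 is already attached to cnode1, the target rejects every attempt with JSON-RPC error -32602 (Invalid parameters), producing the triplet above.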
00:44:54.567 [... the failure triplet continues for every retry from 05:56:14.456 through 05:56:15.514 while the 5-second I/O run proceeds; the retry interval varies between roughly 10 and 16 ms, and the wall clock advances from 00:44:54.567 to 00:44:55.610 ...]
00:44:55.610 16261.00 IOPS, 127.04 MiB/s [2024-11-20T05:56:15.529Z]
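As a quick consistency check on the I/O progress line above: 127.04 MiB/s at 16261.00 IOPS implies an 8 KiB I/O size (assuming MiB here means 2^20 bytes, the usual convention in these reports):

$ awk 'BEGIN { printf "%.1f bytes per I/O\n", 127.04 * 1048576 / 16261.00 }'
8192.1 bytes per I/O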
00:44:55.870 [... identical retries continue from 05:56:15.529 through 05:56:16.104, wall clock 00:44:55.870 through 00:44:56.392, each one failing with the same Code=-32602 triplet ...]
00:44:56.392 [2024-11-20 05:56:16.118595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.118624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.133951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.133981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.149023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.149053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.164494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.164523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.178558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.178589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.193303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.193331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.208388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.208417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.223949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.223978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.238844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.238874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.253874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.253904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.268805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.268834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.285076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.285105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.392 [2024-11-20 05:56:16.300440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.392 [2024-11-20 05:56:16.300480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.392 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.652 [2024-11-20 05:56:16.314212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:44:56.653 [2024-11-20 05:56:16.314242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.329643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.329673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.344737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.344766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.360990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.361020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.376791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.376820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.392768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.392797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.408735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.408763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.428227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.428258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.442755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.442785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 16239.00 IOPS, 126.87 MiB/s [2024-11-20T05:56:16.572Z] [2024-11-20 05:56:16.458393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.458422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.473544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.473573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.488639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.488669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.499853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.499883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.514160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:44:56.653 [2024-11-20 05:56:16.514190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.529383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.529413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.545792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.545820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.653 [2024-11-20 05:56:16.561656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.653 [2024-11-20 05:56:16.561684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.653 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.913 [2024-11-20 05:56:16.577274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.913 [2024-11-20 05:56:16.577304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.913 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.913 [2024-11-20 05:56:16.592702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.913 [2024-11-20 05:56:16.592734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.913 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.913 [2024-11-20 05:56:16.608552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.913 [2024-11-20 05:56:16.608580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.913 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.913 [2024-11-20 05:56:16.619125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.913 [2024-11-20 05:56:16.619154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.913 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.913 [2024-11-20 05:56:16.634673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.913 [2024-11-20 05:56:16.634703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.649978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.650008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.665574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.665602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.680824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.680854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.696968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.696998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.712222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.712252] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.724699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.724727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.740652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.740681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.752608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.752636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.766341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.766372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.781754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.781783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.796613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.796641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.809042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.809071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:56.914 [2024-11-20 05:56:16.824697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:56.914 [2024-11-20 05:56:16.824732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:56.914 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.173 [2024-11-20 05:56:16.841179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.173 [2024-11-20 05:56:16.841207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.173 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.173 [2024-11-20 05:56:16.856899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.173 [2024-11-20 05:56:16.856929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.173 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.173 [2024-11-20 05:56:16.872184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.173 [2024-11-20 05:56:16.872216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.173 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.173 [2024-11-20 05:56:16.886871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.173 [2024-11-20 05:56:16.886900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.173 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.173 [2024-11-20 05:56:16.902131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.173 [2024-11-20 05:56:16.902163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:44:57.173 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.173 [2024-11-20 05:56:16.917142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.173 [2024-11-20 05:56:16.917172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.173 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.173 [2024-11-20 05:56:16.932295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:16.932329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:16.946788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:16.946817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:16.962092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:16.962122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:16.977019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:16.977050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:16.992786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:16.992815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:44:57.174 [2024-11-20 05:56:17.008456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:17.008485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:17.022514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:17.022545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:17.037791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:17.037820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:17.052737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:17.052765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:17.067987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:17.068017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.174 [2024-11-20 05:56:17.083046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.174 [2024-11-20 05:56:17.083076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.174 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.098793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.098822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.116249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.116280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.129502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.129530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.144710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.144738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.161260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.161289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.176180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.176210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.190361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.190390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.205674] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.205715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.220558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.220587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.234462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.234490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.249216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.249244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.264721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.264751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.280202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.280232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.294058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.294087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.309345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.309376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.324642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.324670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.336405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.336436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.435 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.435 [2024-11-20 05:56:17.350585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.435 [2024-11-20 05:56:17.350613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.366020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.366048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.381089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.381119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.396071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.396100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.412639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.412668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.424439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.424478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.438687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.438714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 16211.33 IOPS, 126.65 MiB/s [2024-11-20T05:56:17.614Z] [2024-11-20 05:56:17.454395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.454428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.470184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.470214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.485345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.485374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.695 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.695 [2024-11-20 05:56:17.500336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.695 [2024-11-20 05:56:17.500365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.696 [2024-11-20 05:56:17.512744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.696 [2024-11-20 05:56:17.512772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.696 [2024-11-20 05:56:17.528893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.696 [2024-11-20 05:56:17.528923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.696 [2024-11-20 05:56:17.544867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.696 [2024-11-20 05:56:17.544895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.696 [2024-11-20 05:56:17.560335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.696 [2024-11-20 05:56:17.560363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.696 [2024-11-20 05:56:17.574287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.696 [2024-11-20 05:56:17.574318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.696 [2024-11-20 05:56:17.589487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:44:57.696 [2024-11-20 05:56:17.589513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.696 [2024-11-20 05:56:17.604494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.696 [2024-11-20 05:56:17.604522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.696 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.616061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.616091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.630661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.630689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.646180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.646208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.661119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.661147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.676712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.676740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.692103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.692133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.706667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.706695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.721341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.721370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.736937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.736967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.752749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.752777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.768080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.768109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.782928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
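(Annotation, for readability: every retry above is the Go test client sending the same JSON-RPC request and the target rejecting it because NSID 1 is already allocated on cnode1. Reconstructed from the logged params map, and noting that the %!s(bool=false) token is just the client printing the boolean false with Go's %s verb, the request body would look roughly like the sketch below; Code=-32602 in each reply is the standard JSON-RPC "invalid params" error code.)

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "nvmf_subsystem_add_ns",
      "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": { "bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false }
      }
    }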
00:44:57.955 [2024-11-20 05:56:17.782957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.799910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.799939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.814662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.814689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.830307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.955 [2024-11-20 05:56:17.830339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.955 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.955 [2024-11-20 05:56:17.845053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.956 [2024-11-20 05:56:17.845081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.956 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:57.956 [2024-11-20 05:56:17.861267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:57.956 [2024-11-20 05:56:17.861295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:57.956 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.215 [2024-11-20 05:56:17.876427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.215 [2024-11-20 05:56:17.876464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.215 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.215 [2024-11-20 05:56:17.889136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.215 [2024-11-20 05:56:17.889166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.215 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.215 [2024-11-20 05:56:17.904346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.215 [2024-11-20 05:56:17.904374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.215 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.215 [2024-11-20 05:56:17.918740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.215 [2024-11-20 05:56:17.918769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.215 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.215 [2024-11-20 05:56:17.933535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.215 [2024-11-20 05:56:17.933564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.215 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:17.948315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:17.948350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:17.959761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:17.959791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:17.975129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:17.975159] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:17.990214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:17.990244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.005352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.005382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.020773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.020803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.035890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.035919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.050499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.050530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.065745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.065775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.081116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.081145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.096190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.096221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.107581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.107610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.216 [2024-11-20 05:56:18.122027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.216 [2024-11-20 05:56:18.122057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.216 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.137126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.137156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.153005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.153035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.168172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.168203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.184362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.184391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.197886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.197916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.213417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.213457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.228158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.228188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.244123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.244152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.260606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.260635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:44:58.476 [2024-11-20 05:56:18.274299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.274329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.289399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.289428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.304275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.304305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.318297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.476 [2024-11-20 05:56:18.318326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.476 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.476 [2024-11-20 05:56:18.333806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.477 [2024-11-20 05:56:18.333836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.477 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.477 [2024-11-20 05:56:18.349551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.477 [2024-11-20 05:56:18.349592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.477 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.477 [2024-11-20 05:56:18.365112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.477 [2024-11-20 05:56:18.365141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.477 2024/11/20 05:56:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.477 [2024-11-20 05:56:18.381122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.477 [2024-11-20 05:56:18.381153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.477 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.396257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.396287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.411239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.411268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.428295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.428340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.442057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.442086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 16227.00 IOPS, 126.77 MiB/s [2024-11-20T05:56:18.655Z] [2024-11-20 05:56:18.457416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.457456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:44:58.736 [2024-11-20 05:56:18.475969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.475999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.490636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.490664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.506373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.506403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.521537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.521563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.536492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.536520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.736 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.736 [2024-11-20 05:56:18.547273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.736 [2024-11-20 05:56:18.547302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.737 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.737 [2024-11-20 05:56:18.562406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.737 [2024-11-20 05:56:18.562434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.737 2024/11/20 05:56:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.737 [2024-11-20 05:56:18.577400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.737 [2024-11-20 05:56:18.577428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.737 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.737 [2024-11-20 05:56:18.593132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.737 [2024-11-20 05:56:18.593160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.737 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.737 [2024-11-20 05:56:18.608195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.737 [2024-11-20 05:56:18.608225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.737 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.737 [2024-11-20 05:56:18.624459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.737 [2024-11-20 05:56:18.624486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.737 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.737 [2024-11-20 05:56:18.638082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.737 [2024-11-20 05:56:18.638111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.737 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.737 [2024-11-20 05:56:18.652965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.737 [2024-11-20 05:56:18.652995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.668993] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.669020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.684592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.684619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.698215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.698245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.713208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.713237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.728835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.728863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.744657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.744685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.754946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.754974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.770147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.770176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.785352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.785380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.800180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.800209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.816435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.816472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.830679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.830706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.845896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.845924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.861232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.861260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.876501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.876530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.890109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.890137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:58.997 [2024-11-20 05:56:18.905385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:58.997 [2024-11-20 05:56:18.905414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:58.997 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:18.920593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:18.920623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:18.932031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:18.932060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:18.946208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:18.946237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:18.961577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:18.961605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:18.976888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:18.976918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:18.992471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:18.992498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:19.005296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:19.005325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:19.020440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:19.020478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.257 [2024-11-20 05:56:19.030992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.257 [2024-11-20 05:56:19.031020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.257 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:44:59.258 [2024-11-20 05:56:19.046309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
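(Annotation: to reproduce one of these rejections by hand against a running target, the equivalent call through SPDK's scripts/rpc.py would be something like the sketch below; the socket path is the default assumption, and it is the second, repeated invocation that fails with -32602 because NSID 1 is by then already in use.)

    # first add succeeds and claims NSID 1 on the subsystem
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # repeating it triggers: "Requested NSID 1 already in use"
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1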
00:44:59.258 [2024-11-20 05:56:19.046338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.258 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:44:59.258 [2024-11-20 05:56:19.061260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.258 [2024-11-20 05:56:19.061288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.258 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:44:59.777 [2024-11-20 05:56:19.441312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.777 [2024-11-20 05:56:19.441341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.777 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:44:59.777 16247.80 IOPS, 126.94 MiB/s [2024-11-20T05:56:19.696Z] [2024-11-20 05:56:19.453512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.777 [2024-11-20 05:56:19.453538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.777
00:44:59.777 Latency(us) 00:44:59.777 [2024-11-20T05:56:19.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:59.777 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:44:59.777 Nvme1n1 : 5.01 16248.71 126.94 0.00 0.00 7869.38 1903.66 14355.50 00:44:59.777 [2024-11-20T05:56:19.696Z] =================================================================================================================== 00:44:59.777 [2024-11-20T05:56:19.696Z] Total : 16248.71 126.94 0.00 0.00 7869.38 1903.66 14355.50
00:44:59.777 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:44:59.777 [2024-11-20 05:56:19.464401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:44:59.777 [2024-11-20 05:56:19.464427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:44:59.777 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:45:00.038 [2024-11-20 05:56:19.728397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:00.038 [2024-11-20 05:56:19.728416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:45:00.038 2024/11/20 05:56:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:00.038 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (105541) - No such process 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 105541 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:00.038 delay0 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:00.038 05:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:45:00.038 [2024-11-20 05:56:19.934076] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:45:08.163 Initializing NVMe Controllers 00:45:08.163 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:45:08.163 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:08.163 Initialization complete. Launching workers. 
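The Code=-32602 failures above are the expected result of zcopy.sh re-issuing nvmf_subsystem_add_ns while NSID 1 is still attached; once the background caller (pid 105541) is gone, the script frees the NSID and re-backs it with a delay bdev, as traced above. A minimal standalone sketch of that RPC sequence, assuming a running nvmf target on the default /var/tmp/spdk.sock and the stock scripts/rpc.py client (subsystem serial and bdev sizes here are illustrative, not taken from this run):

  # subsystem with an open host allow-list, backed by a 64 MiB malloc bdev as NSID 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # a second add with the same NSID is rejected exactly as in the log:
  # "Requested NSID 1 already in use" -> JSON-RPC Code=-32602 Msg=Invalid parameters
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true

  # free the NSID, wrap malloc0 in a 1 s (1000000 us) delay bdev, re-add it as NSID 1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

The delay bdev keeps I/O in flight long enough for the abort example launched above to have outstanding commands to cancel; its per-namespace and per-controller results follow below.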
00:45:08.163 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 270, failed: 21592 00:45:08.164 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21754, failed to submit 108 00:45:08.164 success 21681, unsuccessful 73, failed 0 00:45:08.164 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:45:08.164 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:45:08.164 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:08.164 05:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:08.164 rmmod nvme_tcp 00:45:08.164 rmmod nvme_fabrics 00:45:08.164 rmmod nvme_keyring 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 105390 ']' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 105390 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 105390 ']' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 105390 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 105390 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:45:08.164 killing process with pid 105390 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 105390' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 105390 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 105390 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:08.164 05:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:45:08.164 00:45:08.164 real 0m25.816s 00:45:08.164 user 0m37.075s 00:45:08.164 sys 0m11.290s 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:08.164 ************************************ 00:45:08.164 END TEST nvmf_zcopy 
00:45:08.164 ************************************ 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:08.164 ************************************ 00:45:08.164 START TEST nvmf_nmic 00:45:08.164 ************************************ 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:45:08.164 * Looking for test storage... 00:45:08.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:08.164 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:08.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:08.164 --rc genhtml_branch_coverage=1 00:45:08.164 --rc genhtml_function_coverage=1 00:45:08.164 --rc genhtml_legend=1 00:45:08.164 --rc geninfo_all_blocks=1 00:45:08.165 --rc geninfo_unexecuted_blocks=1 00:45:08.165 00:45:08.165 ' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:08.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:08.165 --rc genhtml_branch_coverage=1 00:45:08.165 --rc genhtml_function_coverage=1 00:45:08.165 --rc genhtml_legend=1 00:45:08.165 --rc geninfo_all_blocks=1 00:45:08.165 --rc geninfo_unexecuted_blocks=1 00:45:08.165 00:45:08.165 ' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:08.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:08.165 --rc genhtml_branch_coverage=1 00:45:08.165 --rc genhtml_function_coverage=1 00:45:08.165 --rc genhtml_legend=1 00:45:08.165 --rc geninfo_all_blocks=1 00:45:08.165 --rc geninfo_unexecuted_blocks=1 00:45:08.165 00:45:08.165 ' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:08.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:08.165 --rc genhtml_branch_coverage=1 00:45:08.165 --rc genhtml_function_coverage=1 00:45:08.165 --rc genhtml_legend=1 00:45:08.165 --rc geninfo_all_blocks=1 00:45:08.165 --rc geninfo_unexecuted_blocks=1 00:45:08.165 00:45:08.165 ' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:08.165 05:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:08.165 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:08.166 Cannot find device "nvmf_init_br" 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:08.166 Cannot find device "nvmf_init_br2" 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:08.166 Cannot find device "nvmf_tgt_br" 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:08.166 Cannot find device "nvmf_tgt_br2" 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:08.166 Cannot find device "nvmf_init_br" 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:08.166 Cannot find device "nvmf_init_br2" 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:45:08.166 05:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:08.166 Cannot find device "nvmf_tgt_br" 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:08.166 Cannot find device "nvmf_tgt_br2" 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
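Each "Cannot find device" above (and in the checks that continue below for the bridge and veth endpoints) is nvmf_veth_init defensively removing leftovers from a previous run before rebuilding the fixture; nvmf_veth_fini in the zcopy teardown earlier ran the same commands. A condensed sketch of that idempotent cleanup, assuming iproute2 and the interface names from nvmf/common.sh (the error suppression is an assumption; the script's own guards differ in detail):

  # best-effort teardown: "Cannot find device" on a fresh host is expected
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true
      ip link set "$dev" down 2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip link delete nvmf_init_if2 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
  # namespace removal itself is handled by _remove_spdk_ns in the real script;
  # a plain delete is the hypothetical equivalent here
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true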
00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:08.166 Cannot find device "nvmf_br" 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:08.166 Cannot find device "nvmf_init_if" 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:08.166 Cannot find device "nvmf_init_if2" 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:08.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:45:08.166 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:08.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:08.425 05:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:08.425 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:08.684 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:08.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:45:08.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:45:08.684 00:45:08.684 --- 10.0.0.3 ping statistics --- 00:45:08.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:08.685 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:08.685 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:08.685 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:45:08.685 00:45:08.685 --- 10.0.0.4 ping statistics --- 00:45:08.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:08.685 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:08.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:08.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:45:08.685 00:45:08.685 --- 10.0.0.1 ping statistics --- 00:45:08.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:08.685 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:08.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:08.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:45:08.685 00:45:08.685 --- 10.0.0.2 ping statistics --- 00:45:08.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:08.685 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=105931 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 105931 00:45:08.685 05:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 105931 ']' 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:08.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:08.685 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:08.685 [2024-11-20 05:56:28.439213] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:08.685 [2024-11-20 05:56:28.440080] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:45:08.685 [2024-11-20 05:56:28.440556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:08.685 [2024-11-20 05:56:28.590107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:08.943 [2024-11-20 05:56:28.675225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:08.943 [2024-11-20 05:56:28.675288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:08.943 [2024-11-20 05:56:28.675304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:08.943 [2024-11-20 05:56:28.675318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:08.943 [2024-11-20 05:56:28.675329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:08.943 [2024-11-20 05:56:28.676881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:08.943 [2024-11-20 05:56:28.676987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:08.943 [2024-11-20 05:56:28.677155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:08.943 [2024-11-20 05:56:28.677159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:08.943 [2024-11-20 05:56:28.823086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:08.943 [2024-11-20 05:56:28.823141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:08.943 [2024-11-20 05:56:28.824539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
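nvmf/common.sh@508-510 above launches the target inside the freshly built namespace with interrupt-mode reactors and waits for its RPC socket. A minimal standalone sketch of that step, assuming the same namespace name and repo path as this run; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact implementation:

    # Assumes the nvmf_tgt_ns_spdk namespace already exists (common.sh@177).
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk

    # -i 0: shared-memory instance id, -e 0xFFFF: tracepoint group mask,
    # -m 0xF: reactors on cores 0-3, --interrupt-mode: event-driven reactors
    # instead of busy polling.
    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Poll the default RPC socket until the target is ready to serve requests.
    until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done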
00:45:08.943 [2024-11-20 05:56:28.824705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:45:08.943 [2024-11-20 05:56:28.825558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:08.943 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:08.943 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:45:08.943 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:08.943 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:08.943 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.202 [2024-11-20 05:56:28.922342] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.202 Malloc0 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.202 05:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
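The rpc_cmd sequence that target/nmic.sh@17-23 issues here reduces to plain rpc.py calls against that socket. A recap using the exact arguments from this run (-o is TCP-specific and comes straight from this run's NVMF_TRANSPORT_OPTS; -u 8192 sets the I/O unit size):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                         # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420                         # portal on the namespaced interface

Test case1 below then tries to add the same Malloc0 to a second subsystem, which the target rejects because the NVMe-oF module holds an exclusive_write claim on the bdev, exactly the error the log reports.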
00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.202 [2024-11-20 05:56:29.018358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.202 test case1: single bdev can't be used in multiple subsystems 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:45:09.202 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.203 [2024-11-20 05:56:29.046101] bdev.c:8311:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:45:09.203 [2024-11-20 05:56:29.046131] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:45:09.203 [2024-11-20 05:56:29.046142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:09.203 2024/11/20 05:56:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:45:09.203 request: 00:45:09.203 { 00:45:09.203 "method": "nvmf_subsystem_add_ns", 00:45:09.203 "params": { 00:45:09.203 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:45:09.203 "namespace": { 00:45:09.203 "bdev_name": "Malloc0", 00:45:09.203 "no_auto_visible": false 00:45:09.203 } 00:45:09.203 } 00:45:09.203 } 00:45:09.203 Got JSON-RPC error response 00:45:09.203 GoRPCClient: error on JSON-RPC call 00:45:09.203 05:56:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:45:09.203 Adding namespace failed - expected result. 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:45:09.203 test case2: host connect to nvmf target in multiple paths 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:09.203 [2024-11-20 05:56:29.062208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:09.203 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:45:09.461 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:45:09.461 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:45:09.461 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:45:09.461 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:45:09.461 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:45:09.461 05:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:45:11.393 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:45:11.393 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:45:11.393 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:45:11.393 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:45:11.393 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:45:11.393 05:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:45:11.393 05:56:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:45:11.393 [global] 00:45:11.393 thread=1 00:45:11.393 invalidate=1 00:45:11.393 rw=write 00:45:11.393 time_based=1 00:45:11.393 runtime=1 00:45:11.393 ioengine=libaio 00:45:11.393 direct=1 00:45:11.393 bs=4096 00:45:11.393 iodepth=1 00:45:11.393 norandommap=0 00:45:11.393 numjobs=1 00:45:11.393 00:45:11.393 verify_dump=1 00:45:11.393 verify_backlog=512 00:45:11.393 verify_state_save=0 00:45:11.393 do_verify=1 00:45:11.393 verify=crc32c-intel 00:45:11.652 [job0] 00:45:11.652 filename=/dev/nvme0n1 00:45:11.652 Could not set queue depth (nvme0n1) 00:45:11.652 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:11.652 fio-3.35 00:45:11.652 Starting 1 thread 00:45:13.031 00:45:13.031 job0: (groupid=0, jobs=1): err= 0: pid=106023: Wed Nov 20 05:56:32 2024 00:45:13.031 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:45:13.031 slat (nsec): min=8570, max=33025, avg=10523.65, stdev=2662.19 00:45:13.031 clat (usec): min=122, max=2781, avg=141.62, stdev=62.89 00:45:13.031 lat (usec): min=131, max=2807, avg=152.14, stdev=63.35 00:45:13.031 clat percentiles (usec): 00:45:13.031 | 1.00th=[ 127], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:45:13.032 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:45:13.032 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 157], 00:45:13.032 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 865], 99.95th=[ 2212], 00:45:13.032 | 99.99th=[ 2769] 00:45:13.032 write: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1001msec); 0 zone resets 00:45:13.032 slat (usec): min=12, max=112, avg=15.64, stdev= 5.32 00:45:13.032 clat (usec): min=79, max=1001, avg=98.59, stdev=22.42 00:45:13.032 lat (usec): min=92, max=1020, avg=114.23, stdev=24.05 00:45:13.032 clat percentiles (usec): 00:45:13.032 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 92], 00:45:13.032 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 99], 00:45:13.032 | 70.00th=[ 101], 80.00th=[ 103], 90.00th=[ 109], 95.00th=[ 113], 00:45:13.032 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 302], 99.95th=[ 807], 00:45:13.032 | 99.99th=[ 1004] 00:45:13.032 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:45:13.032 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:45:13.032 lat (usec) : 100=34.72%, 250=65.11%, 500=0.08%, 750=0.01%, 1000=0.03% 00:45:13.032 lat (msec) : 2=0.03%, 4=0.03% 00:45:13.032 cpu : usr=1.90%, sys=7.40%, ctx=7500, majf=0, minf=5 00:45:13.032 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:13.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:13.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:13.032 issued rwts: total=3584,3916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:13.032 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:13.032 00:45:13.032 Run status group 0 (all jobs): 00:45:13.032 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:45:13.032 WRITE: bw=15.3MiB/s (16.0MB/s), 15.3MiB/s-15.3MiB/s (16.0MB/s-16.0MB/s), io=15.3MiB (16.0MB), run=1001-1001msec 00:45:13.032 00:45:13.032 Disk stats (read/write): 00:45:13.032 nvme0n1: ios=3198/3584, merge=0/0, ticks=457/376, in_queue=833, util=90.78% 00:45:13.032 
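Test case2 above connects the initiator to the same subsystem through two portals. A recap of what target/nmic.sh@41-44 executed, assuming native NVMe multipathing in the initiator kernel, which is why the two controllers surface as the single namespace fio then targets:

    # hostnqn/hostid are the UUID pair this run generated via 'nvme gen-hostnqn';
    # any fresh pair behaves the same way.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203
    hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203

    # Same subsystem, two portals (4420 and 4421): one controller per path.
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421

    # waitforserial: poll until a block device with the subsystem serial shows up.
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
        sleep 2
    done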
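The fio-wrapper call itself expands to a plain fio run over the job echoed verbatim in the log. A standalone equivalent, assuming the multipath namespace is still attached as /dev/nvme0n1 and an arbitrary path for the job file; verify=crc32c-intel makes fio stamp each written block with a CRC32C checksum and re-check it on the verification pass:

    # Job file contents copied from the log output above.
    cat > /tmp/nmic_write.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF

    fio /tmp/nmic_write.fio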
05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:13.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:13.032 rmmod nvme_tcp 00:45:13.032 rmmod nvme_fabrics 00:45:13.032 rmmod nvme_keyring 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 105931 ']' 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 105931 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 105931 ']' 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 105931 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 105931 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:13.032 05:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 105931' 00:45:13.032 killing process with pid 105931 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 105931 00:45:13.032 05:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 105931 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:13.292 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:13.552 05:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:45:13.552 ************************************ 00:45:13.552 END TEST nvmf_nmic 00:45:13.552 ************************************ 00:45:13.552 00:45:13.552 real 0m5.795s 00:45:13.552 user 0m15.106s 00:45:13.552 sys 0m2.314s 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:13.552 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:13.811 ************************************ 00:45:13.811 START TEST nvmf_fio_target 00:45:13.811 ************************************ 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:45:13.811 * Looking for test storage... 
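Before nvmf_fio_target rebuilds the same environment, the nmic teardown that just completed is worth unpacking: iptr (nvmf/common.sh@297) restores every firewall rule except those tagged SPDK_NVMF when they were installed, and nvmf_veth_fini unwinds the topology in reverse. A condensed sketch with the names from this run; the final netns delete happens inside the harness's remove_spdk_ns and is spelled out here as an assumption:

    # Drop only the rules this test tagged with 'SPDK_NVMF' comments.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach and down the bridge-side veth peers, then delete everything.
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster
        ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed: performed by remove_spdk_ns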
00:45:13.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:45:13.811 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:13.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.812 --rc genhtml_branch_coverage=1 00:45:13.812 --rc genhtml_function_coverage=1 00:45:13.812 --rc genhtml_legend=1 00:45:13.812 --rc geninfo_all_blocks=1 00:45:13.812 --rc geninfo_unexecuted_blocks=1 00:45:13.812 00:45:13.812 ' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:13.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.812 --rc genhtml_branch_coverage=1 00:45:13.812 --rc genhtml_function_coverage=1 00:45:13.812 --rc genhtml_legend=1 00:45:13.812 --rc geninfo_all_blocks=1 00:45:13.812 --rc geninfo_unexecuted_blocks=1 00:45:13.812 00:45:13.812 ' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:13.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.812 --rc genhtml_branch_coverage=1 00:45:13.812 --rc genhtml_function_coverage=1 00:45:13.812 --rc genhtml_legend=1 00:45:13.812 --rc geninfo_all_blocks=1 00:45:13.812 --rc geninfo_unexecuted_blocks=1 00:45:13.812 00:45:13.812 ' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:13.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:13.812 --rc genhtml_branch_coverage=1 00:45:13.812 --rc genhtml_function_coverage=1 00:45:13.812 --rc genhtml_legend=1 00:45:13.812 --rc geninfo_all_blocks=1 00:45:13.812 --rc geninfo_unexecuted_blocks=1 00:45:13.812 
00:45:13.812 ' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:13.812 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:13.813 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:13.813 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:13.813 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:13.813 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:13.813 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:14.072 05:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:14.072 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:14.073 Cannot find device "nvmf_init_br" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:14.073 Cannot find device "nvmf_init_br2" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:14.073 Cannot find device "nvmf_tgt_br" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:14.073 Cannot find device "nvmf_tgt_br2" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:14.073 Cannot find device "nvmf_init_br" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:14.073 Cannot find device "nvmf_init_br2" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:14.073 Cannot find device "nvmf_tgt_br" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:14.073 Cannot find device "nvmf_tgt_br2" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:14.073 Cannot find device "nvmf_br" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:14.073 Cannot find device "nvmf_init_if" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:14.073 Cannot find device "nvmf_init_if2" 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:14.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:14.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:14.073 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:14.074 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:14.074 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:14.074 05:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:14.334 05:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:14.334 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:14.335 05:56:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:14.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:14.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:45:14.335 00:45:14.335 --- 10.0.0.3 ping statistics --- 00:45:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.335 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:14.335 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:14.335 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:45:14.335 00:45:14.335 --- 10.0.0.4 ping statistics --- 00:45:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.335 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:14.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:14.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:45:14.335 00:45:14.335 --- 10.0.0.1 ping statistics --- 00:45:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.335 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:14.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:14.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:45:14.335 00:45:14.335 --- 10.0.0.2 ping statistics --- 00:45:14.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:14.335 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=106256 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 106256 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 106256 ']' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:14.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:14.335 05:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:45:14.594 [2024-11-20 05:56:34.301098] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
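For readers reconstructing the harness from this trace: the veth/namespace plumbing set up above condenses to the sketch below. Interface names, addresses, the iptables rule, and the nvmf_tgt invocation are copied from the trace itself; the condensation (and dropping the second *_if2/*_br2 pair, which is configured identically) is editorial, so treat this as a simplified reading of nvmf/common.sh, not its verbatim contents.

    # Target-side namespace joined to the host through veth pairs and a bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side peers so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Admit NVMe/TCP traffic on port 4420, as the ipts wrapper does above
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # sanity check before starting the target
    # Launch the target inside the namespace, in interrupt mode as in this job
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &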
00:45:14.594 [2024-11-20 05:56:34.302617] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:45:14.594 [2024-11-20 05:56:34.303361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:14.594 [2024-11-20 05:56:34.465787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:14.853 [2024-11-20 05:56:34.543413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:14.853 [2024-11-20 05:56:34.543474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:14.853 [2024-11-20 05:56:34.543485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:14.853 [2024-11-20 05:56:34.543494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:14.853 [2024-11-20 05:56:34.543501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:14.853 [2024-11-20 05:56:34.544945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:14.853 [2024-11-20 05:56:34.545034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:14.853 [2024-11-20 05:56:34.545207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:14.853 [2024-11-20 05:56:34.545209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:14.853 [2024-11-20 05:56:34.681513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:14.853 [2024-11-20 05:56:34.681680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:14.853 [2024-11-20 05:56:34.682874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:45:14.853 [2024-11-20 05:56:34.683141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:45:14.853 [2024-11-20 05:56:34.683958] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
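With all four reactors up in interrupt mode, target/fio.sh builds the test configuration over the default RPC socket (/var/tmp/spdk.sock). The sequence traced below condenses to the following sketch; every command and argument is taken from the trace, while the loops and the $rpc shorthand are editorial:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # transport flags exactly as in the trace
    for _ in 1 2 3 4 5 6 7; do
        $rpc bdev_malloc_create 64 512                    # yields Malloc0 .. Malloc6
    done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do         # one namespace per bdev
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 \
        --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203

On the initiator side the four namespaces surface as /dev/nvme0n1 through /dev/nvme0n4, which is what waitforserial confirms below by grepping lsblk -l -o NAME,SERIAL for the SPDKISFASTANDAWESOME serial before the fio jobs start.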
00:45:15.421 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:15.421 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:45:15.421 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:15.421 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:15.421 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:45:15.680 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:15.680 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:45:15.939 [2024-11-20 05:56:35.750133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:15.939 05:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:16.507 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:45:16.507 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:16.507 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:45:16.507 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:16.766 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:45:16.766 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:17.025 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:45:17.025 05:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:45:17.284 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:17.852 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:45:17.852 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:18.111 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:45:18.111 05:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:45:18.370 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:45:18.370 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:45:18.629 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:19.196 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:45:19.196 05:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:19.197 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:45:19.197 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:45:19.456 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:19.716 [2024-11-20 05:56:39.494094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:19.716 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:45:19.975 05:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:45:20.234 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:45:20.234 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:45:20.234 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:45:20.234 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:45:20.234 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:45:20.234 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:45:20.234 05:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:45:22.771 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:45:22.771 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:45:22.771 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:45:22.771 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:45:22.771 05:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:45:22.771 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:45:22.771 05:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:45:22.771 [global] 00:45:22.771 thread=1 00:45:22.771 invalidate=1 00:45:22.771 rw=write 00:45:22.771 time_based=1 00:45:22.771 runtime=1 00:45:22.771 ioengine=libaio 00:45:22.771 direct=1 00:45:22.771 bs=4096 00:45:22.771 iodepth=1 00:45:22.771 norandommap=0 00:45:22.771 numjobs=1 00:45:22.771 00:45:22.771 verify_dump=1 00:45:22.771 verify_backlog=512 00:45:22.771 verify_state_save=0 00:45:22.771 do_verify=1 00:45:22.771 verify=crc32c-intel 00:45:22.771 [job0] 00:45:22.771 filename=/dev/nvme0n1 00:45:22.771 [job1] 00:45:22.771 filename=/dev/nvme0n2 00:45:22.771 [job2] 00:45:22.771 filename=/dev/nvme0n3 00:45:22.771 [job3] 00:45:22.771 filename=/dev/nvme0n4 00:45:22.771 Could not set queue depth (nvme0n1) 00:45:22.771 Could not set queue depth (nvme0n2) 00:45:22.771 Could not set queue depth (nvme0n3) 00:45:22.771 Could not set queue depth (nvme0n4) 00:45:22.771 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:22.771 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:22.771 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:22.771 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:22.771 fio-3.35 00:45:22.771 Starting 4 threads 00:45:23.709 00:45:23.709 job0: (groupid=0, jobs=1): err= 0: pid=106554: Wed Nov 20 05:56:43 2024 00:45:23.709 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:45:23.710 slat (nsec): min=10332, max=29124, avg=12848.85, stdev=2372.82 00:45:23.710 clat (usec): min=142, max=2013, avg=253.72, stdev=54.38 00:45:23.710 lat (usec): min=155, max=2025, avg=266.57, stdev=54.74 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 178], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 219], 00:45:23.710 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 265], 00:45:23.710 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 314], 00:45:23.710 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 529], 99.95th=[ 644], 00:45:23.710 | 99.99th=[ 2008] 00:45:23.710 write: IOPS=2174, BW=8699KiB/s (8908kB/s)(8708KiB/1001msec); 0 zone resets 00:45:23.710 slat (usec): min=15, max=110, avg=20.90, stdev= 5.96 00:45:23.710 clat (usec): min=109, max=287, avg=185.55, stdev=32.41 00:45:23.710 lat (usec): min=127, max=335, avg=206.45, stdev=34.69 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 122], 5.00th=[ 135], 10.00th=[ 145], 20.00th=[ 157], 00:45:23.710 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 196], 00:45:23.710 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 239], 00:45:23.710 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 285], 00:45:23.710 | 99.99th=[ 289] 00:45:23.710 bw ( KiB/s): min= 8192, max= 8192, per=28.09%, avg=8192.00, stdev= 0.00, samples=1 00:45:23.710 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:45:23.710 lat (usec) : 250=73.25%, 500=26.67%, 750=0.05% 00:45:23.710 lat (msec) : 4=0.02% 
00:45:23.710 cpu : usr=1.10%, sys=5.20%, ctx=4225, majf=0, minf=7 00:45:23.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:23.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 issued rwts: total=2048,2177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:23.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:23.710 job1: (groupid=0, jobs=1): err= 0: pid=106555: Wed Nov 20 05:56:43 2024 00:45:23.710 read: IOPS=1028, BW=4116KiB/s (4215kB/s)(4120KiB/1001msec) 00:45:23.710 slat (nsec): min=9831, max=32912, avg=15311.16, stdev=3127.80 00:45:23.710 clat (usec): min=216, max=1910, avg=444.28, stdev=91.14 00:45:23.710 lat (usec): min=228, max=1920, avg=459.59, stdev=91.50 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 260], 5.00th=[ 338], 10.00th=[ 359], 20.00th=[ 388], 00:45:23.710 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 453], 00:45:23.710 | 70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 529], 95.00th=[ 586], 00:45:23.710 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 742], 99.95th=[ 1909], 00:45:23.710 | 99.99th=[ 1909] 00:45:23.710 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:45:23.710 slat (nsec): min=13167, max=80719, avg=24423.90, stdev=7543.95 00:45:23.710 clat (usec): min=160, max=572, avg=316.17, stdev=58.12 00:45:23.710 lat (usec): min=190, max=605, avg=340.59, stdev=58.19 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 200], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 265], 00:45:23.710 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 310], 60.00th=[ 334], 00:45:23.710 | 70.00th=[ 351], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 412], 00:45:23.710 | 99.00th=[ 453], 99.50th=[ 494], 99.90th=[ 553], 99.95th=[ 570], 00:45:23.710 | 99.99th=[ 570] 00:45:23.710 bw ( KiB/s): min= 6376, max= 6376, per=21.87%, avg=6376.00, stdev= 0.00, samples=1 00:45:23.710 iops : min= 1594, max= 1594, avg=1594.00, stdev= 0.00, samples=1 00:45:23.710 lat (usec) : 250=7.09%, 500=85.70%, 750=7.17% 00:45:23.710 lat (msec) : 2=0.04% 00:45:23.710 cpu : usr=1.10%, sys=3.90%, ctx=2566, majf=0, minf=11 00:45:23.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:23.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:23.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:23.710 job2: (groupid=0, jobs=1): err= 0: pid=106556: Wed Nov 20 05:56:43 2024 00:45:23.710 read: IOPS=2015, BW=8064KiB/s (8257kB/s)(8072KiB/1001msec) 00:45:23.710 slat (nsec): min=10530, max=34568, avg=13136.63, stdev=2683.57 00:45:23.710 clat (usec): min=174, max=1019, avg=258.71, stdev=43.05 00:45:23.710 lat (usec): min=186, max=1042, avg=271.84, stdev=43.69 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 227], 00:45:23.710 | 30.00th=[ 237], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 265], 00:45:23.710 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 314], 00:45:23.710 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 725], 99.95th=[ 750], 00:45:23.710 | 99.99th=[ 1020] 00:45:23.710 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:45:23.710 slat (nsec): min=15576, max=95765, avg=21877.31, 
stdev=6012.05 00:45:23.710 clat (usec): min=119, max=2697, avg=196.36, stdev=85.31 00:45:23.710 lat (usec): min=135, max=2725, avg=218.24, stdev=86.49 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 167], 00:45:23.710 | 30.00th=[ 174], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:45:23.710 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 245], 00:45:23.710 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 1614], 99.95th=[ 1958], 00:45:23.710 | 99.99th=[ 2704] 00:45:23.710 bw ( KiB/s): min= 8192, max= 8192, per=28.09%, avg=8192.00, stdev= 0.00, samples=1 00:45:23.710 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:45:23.710 lat (usec) : 250=70.29%, 500=29.41%, 750=0.15%, 1000=0.02% 00:45:23.710 lat (msec) : 2=0.10%, 4=0.02% 00:45:23.710 cpu : usr=1.40%, sys=4.80%, ctx=4066, majf=0, minf=15 00:45:23.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:23.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 issued rwts: total=2018,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:23.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:23.710 job3: (groupid=0, jobs=1): err= 0: pid=106557: Wed Nov 20 05:56:43 2024 00:45:23.710 read: IOPS=1028, BW=4116KiB/s (4215kB/s)(4120KiB/1001msec) 00:45:23.710 slat (nsec): min=10020, max=40024, avg=15944.61, stdev=3793.23 00:45:23.710 clat (usec): min=236, max=1897, avg=443.99, stdev=84.07 00:45:23.710 lat (usec): min=248, max=1907, avg=459.93, stdev=84.18 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 277], 5.00th=[ 338], 10.00th=[ 363], 20.00th=[ 392], 00:45:23.710 | 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 453], 00:45:23.710 | 70.00th=[ 469], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 562], 00:45:23.710 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 750], 99.95th=[ 1893], 00:45:23.710 | 99.99th=[ 1893] 00:45:23.710 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:45:23.710 slat (nsec): min=12437, max=72068, avg=25746.53, stdev=6601.86 00:45:23.710 clat (usec): min=186, max=574, avg=314.79, stdev=56.73 00:45:23.710 lat (usec): min=213, max=588, avg=340.54, stdev=56.54 00:45:23.710 clat percentiles (usec): 00:45:23.710 | 1.00th=[ 217], 5.00th=[ 237], 10.00th=[ 247], 20.00th=[ 262], 00:45:23.710 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 310], 60.00th=[ 330], 00:45:23.710 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 412], 00:45:23.710 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 502], 99.95th=[ 578], 00:45:23.710 | 99.99th=[ 578] 00:45:23.710 bw ( KiB/s): min= 6388, max= 6388, per=21.91%, avg=6388.00, stdev= 0.00, samples=1 00:45:23.710 iops : min= 1597, max= 1597, avg=1597.00, stdev= 0.00, samples=1 00:45:23.710 lat (usec) : 250=7.37%, 500=85.23%, 750=7.33%, 1000=0.04% 00:45:23.710 lat (msec) : 2=0.04% 00:45:23.710 cpu : usr=0.90%, sys=4.20%, ctx=2566, majf=0, minf=13 00:45:23.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:23.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.710 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:23.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:23.710 00:45:23.710 Run status group 0 (all jobs): 00:45:23.710 
READ: bw=23.9MiB/s (25.1MB/s), 4116KiB/s-8184KiB/s (4215kB/s-8380kB/s), io=23.9MiB (25.1MB), run=1001-1001msec 00:45:23.710 WRITE: bw=28.5MiB/s (29.9MB/s), 6138KiB/s-8699KiB/s (6285kB/s-8908kB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:45:23.710 00:45:23.710 Disk stats (read/write): 00:45:23.710 nvme0n1: ios=1627/2048, merge=0/0, ticks=435/395, in_queue=830, util=87.26% 00:45:23.710 nvme0n2: ios=1071/1108, merge=0/0, ticks=511/340, in_queue=851, util=88.73% 00:45:23.710 nvme0n3: ios=1536/1948, merge=0/0, ticks=410/395, in_queue=805, util=88.76% 00:45:23.710 nvme0n4: ios=1024/1108, merge=0/0, ticks=452/356, in_queue=808, util=89.61% 00:45:23.710 05:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:45:23.710 [global] 00:45:23.710 thread=1 00:45:23.710 invalidate=1 00:45:23.710 rw=randwrite 00:45:23.710 time_based=1 00:45:23.710 runtime=1 00:45:23.710 ioengine=libaio 00:45:23.710 direct=1 00:45:23.710 bs=4096 00:45:23.710 iodepth=1 00:45:23.710 norandommap=0 00:45:23.710 numjobs=1 00:45:23.710 00:45:23.710 verify_dump=1 00:45:23.710 verify_backlog=512 00:45:23.710 verify_state_save=0 00:45:23.710 do_verify=1 00:45:23.710 verify=crc32c-intel 00:45:23.710 [job0] 00:45:23.710 filename=/dev/nvme0n1 00:45:23.710 [job1] 00:45:23.710 filename=/dev/nvme0n2 00:45:23.710 [job2] 00:45:23.710 filename=/dev/nvme0n3 00:45:23.710 [job3] 00:45:23.710 filename=/dev/nvme0n4 00:45:23.970 Could not set queue depth (nvme0n1) 00:45:23.970 Could not set queue depth (nvme0n2) 00:45:23.970 Could not set queue depth (nvme0n3) 00:45:23.970 Could not set queue depth (nvme0n4) 00:45:23.970 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:23.970 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:23.970 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:23.970 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:23.970 fio-3.35 00:45:23.970 Starting 4 threads 00:45:25.347 00:45:25.347 job0: (groupid=0, jobs=1): err= 0: pid=106611: Wed Nov 20 05:56:44 2024 00:45:25.347 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:45:25.347 slat (nsec): min=8227, max=28306, avg=11475.99, stdev=1488.27 00:45:25.347 clat (usec): min=159, max=500, avg=259.25, stdev=44.54 00:45:25.348 lat (usec): min=172, max=511, avg=270.73, stdev=44.47 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 221], 00:45:25.348 | 30.00th=[ 231], 40.00th=[ 243], 50.00th=[ 258], 60.00th=[ 269], 00:45:25.348 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 334], 00:45:25.348 | 99.00th=[ 383], 99.50th=[ 453], 99.90th=[ 494], 99.95th=[ 498], 00:45:25.348 | 99.99th=[ 502] 00:45:25.348 write: IOPS=2113, BW=8456KiB/s (8658kB/s)(8464KiB/1001msec); 0 zone resets 00:45:25.348 slat (usec): min=9, max=102, avg=18.16, stdev= 4.27 00:45:25.348 clat (usec): min=101, max=1661, avg=190.57, stdev=47.36 00:45:25.348 lat (usec): min=121, max=1678, avg=208.73, stdev=47.86 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 130], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 163], 00:45:25.348 | 30.00th=[ 169], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 196], 00:45:25.348 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 231], 
95.00th=[ 243], 00:45:25.348 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 469], 99.95th=[ 685], 00:45:25.348 | 99.99th=[ 1663] 00:45:25.348 bw ( KiB/s): min= 8742, max= 8742, per=31.03%, avg=8742.00, stdev= 0.00, samples=1 00:45:25.348 iops : min= 2185, max= 2185, avg=2185.00, stdev= 0.00, samples=1 00:45:25.348 lat (usec) : 250=70.73%, 500=29.20%, 750=0.05% 00:45:25.348 lat (msec) : 2=0.02% 00:45:25.348 cpu : usr=1.00%, sys=4.30%, ctx=4165, majf=0, minf=9 00:45:25.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:25.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 issued rwts: total=2048,2116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:25.348 job1: (groupid=0, jobs=1): err= 0: pid=106612: Wed Nov 20 05:56:44 2024 00:45:25.348 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:45:25.348 slat (nsec): min=13807, max=49677, avg=20412.35, stdev=4289.08 00:45:25.348 clat (usec): min=182, max=3937, avg=437.82, stdev=147.30 00:45:25.348 lat (usec): min=202, max=3958, avg=458.23, stdev=147.41 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 241], 5.00th=[ 293], 10.00th=[ 347], 20.00th=[ 379], 00:45:25.348 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 441], 00:45:25.348 | 70.00th=[ 457], 80.00th=[ 482], 90.00th=[ 519], 95.00th=[ 586], 00:45:25.348 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 1598], 99.95th=[ 3949], 00:45:25.348 | 99.99th=[ 3949] 00:45:25.348 write: IOPS=1443, BW=5774KiB/s (5913kB/s)(5780KiB/1001msec); 0 zone resets 00:45:25.348 slat (usec): min=21, max=105, avg=35.37, stdev= 6.08 00:45:25.348 clat (usec): min=185, max=599, avg=328.29, stdev=50.08 00:45:25.348 lat (usec): min=219, max=635, avg=363.66, stdev=50.60 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 235], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 277], 00:45:25.348 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 347], 00:45:25.348 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 408], 00:45:25.348 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 570], 99.95th=[ 603], 00:45:25.348 | 99.99th=[ 603] 00:45:25.348 bw ( KiB/s): min= 5816, max= 5816, per=20.64%, avg=5816.00, stdev= 0.00, samples=1 00:45:25.348 iops : min= 1454, max= 1454, avg=1454.00, stdev= 0.00, samples=1 00:45:25.348 lat (usec) : 250=2.92%, 500=91.17%, 750=5.71%, 1000=0.04% 00:45:25.348 lat (msec) : 2=0.12%, 4=0.04% 00:45:25.348 cpu : usr=1.20%, sys=5.80%, ctx=2470, majf=0, minf=17 00:45:25.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:25.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 issued rwts: total=1024,1445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:25.348 job2: (groupid=0, jobs=1): err= 0: pid=106613: Wed Nov 20 05:56:44 2024 00:45:25.348 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:45:25.348 slat (nsec): min=15617, max=50221, avg=21193.34, stdev=3824.97 00:45:25.348 clat (usec): min=201, max=3077, avg=436.95, stdev=131.16 00:45:25.348 lat (usec): min=220, max=3094, avg=458.15, stdev=130.99 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 269], 5.00th=[ 326], 10.00th=[ 355], 20.00th=[ 383], 00:45:25.348 
| 30.00th=[ 408], 40.00th=[ 420], 50.00th=[ 429], 60.00th=[ 441], 00:45:25.348 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 510], 95.00th=[ 537], 00:45:25.348 | 99.00th=[ 693], 99.50th=[ 758], 99.90th=[ 2802], 99.95th=[ 3064], 00:45:25.348 | 99.99th=[ 3064] 00:45:25.348 write: IOPS=1440, BW=5762KiB/s (5901kB/s)(5768KiB/1001msec); 0 zone resets 00:45:25.348 slat (nsec): min=23427, max=93657, avg=35161.01, stdev=5250.54 00:45:25.348 clat (usec): min=198, max=695, avg=329.36, stdev=50.90 00:45:25.348 lat (usec): min=231, max=730, avg=364.52, stdev=51.37 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 235], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 281], 00:45:25.348 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 343], 00:45:25.348 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 408], 00:45:25.348 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 635], 99.95th=[ 693], 00:45:25.348 | 99.99th=[ 693] 00:45:25.348 bw ( KiB/s): min= 5848, max= 5848, per=20.76%, avg=5848.00, stdev= 0.00, samples=1 00:45:25.348 iops : min= 1462, max= 1462, avg=1462.00, stdev= 0.00, samples=1 00:45:25.348 lat (usec) : 250=2.68%, 500=92.01%, 750=5.07%, 1000=0.16% 00:45:25.348 lat (msec) : 4=0.08% 00:45:25.348 cpu : usr=1.10%, sys=5.70%, ctx=2467, majf=0, minf=17 00:45:25.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:25.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 issued rwts: total=1024,1442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:25.348 job3: (groupid=0, jobs=1): err= 0: pid=106614: Wed Nov 20 05:56:44 2024 00:45:25.348 read: IOPS=1926, BW=7704KiB/s (7889kB/s)(7712KiB/1001msec) 00:45:25.348 slat (nsec): min=8370, max=36158, avg=12987.49, stdev=2560.93 00:45:25.348 clat (usec): min=180, max=3997, avg=270.06, stdev=96.08 00:45:25.348 lat (usec): min=193, max=4015, avg=283.05, stdev=95.90 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 231], 00:45:25.348 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 273], 00:45:25.348 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 355], 00:45:25.348 | 99.00th=[ 429], 99.50th=[ 453], 99.90th=[ 570], 99.95th=[ 4015], 00:45:25.348 | 99.99th=[ 4015] 00:45:25.348 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:45:25.348 slat (usec): min=15, max=120, avg=20.65, stdev= 6.10 00:45:25.348 clat (usec): min=119, max=7570, avg=199.10, stdev=240.18 00:45:25.348 lat (usec): min=136, max=7591, avg=219.75, stdev=240.23 00:45:25.348 clat percentiles (usec): 00:45:25.348 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 165], 00:45:25.348 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 194], 00:45:25.348 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 241], 00:45:25.348 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 2933], 99.95th=[ 7373], 00:45:25.348 | 99.99th=[ 7570] 00:45:25.348 bw ( KiB/s): min= 8464, max= 8464, per=30.04%, avg=8464.00, stdev= 0.00, samples=1 00:45:25.348 iops : min= 2116, max= 2116, avg=2116.00, stdev= 0.00, samples=1 00:45:25.348 lat (usec) : 250=68.23%, 500=31.51%, 750=0.08% 00:45:25.348 lat (msec) : 2=0.08%, 4=0.05%, 10=0.05% 00:45:25.348 cpu : usr=1.00%, sys=4.80%, ctx=3977, majf=0, minf=6 00:45:25.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:45:25.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.348 issued rwts: total=1928,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:25.348 00:45:25.348 Run status group 0 (all jobs): 00:45:25.348 READ: bw=23.5MiB/s (24.6MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=23.5MiB (24.7MB), run=1001-1001msec 00:45:25.348 WRITE: bw=27.5MiB/s (28.9MB/s), 5762KiB/s-8456KiB/s (5901kB/s-8658kB/s), io=27.5MiB (28.9MB), run=1001-1001msec 00:45:25.348 00:45:25.348 Disk stats (read/write): 00:45:25.348 nvme0n1: ios=1661/2048, merge=0/0, ticks=447/407, in_queue=854, util=87.56% 00:45:25.348 nvme0n2: ios=1064/1051, merge=0/0, ticks=478/369, in_queue=847, util=88.11% 00:45:25.348 nvme0n3: ios=1024/1050, merge=0/0, ticks=452/359, in_queue=811, util=89.19% 00:45:25.348 nvme0n4: ios=1536/2006, merge=0/0, ticks=412/400, in_queue=812, util=89.34% 00:45:25.348 05:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:45:25.348 [global] 00:45:25.348 thread=1 00:45:25.348 invalidate=1 00:45:25.348 rw=write 00:45:25.348 time_based=1 00:45:25.348 runtime=1 00:45:25.348 ioengine=libaio 00:45:25.348 direct=1 00:45:25.348 bs=4096 00:45:25.348 iodepth=128 00:45:25.348 norandommap=0 00:45:25.348 numjobs=1 00:45:25.348 00:45:25.348 verify_dump=1 00:45:25.348 verify_backlog=512 00:45:25.348 verify_state_save=0 00:45:25.348 do_verify=1 00:45:25.349 verify=crc32c-intel 00:45:25.349 [job0] 00:45:25.349 filename=/dev/nvme0n1 00:45:25.349 [job1] 00:45:25.349 filename=/dev/nvme0n2 00:45:25.349 [job2] 00:45:25.349 filename=/dev/nvme0n3 00:45:25.349 [job3] 00:45:25.349 filename=/dev/nvme0n4 00:45:25.349 Could not set queue depth (nvme0n1) 00:45:25.349 Could not set queue depth (nvme0n2) 00:45:25.349 Could not set queue depth (nvme0n3) 00:45:25.349 Could not set queue depth (nvme0n4) 00:45:25.349 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:25.349 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:25.349 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:25.349 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:25.349 fio-3.35 00:45:25.349 Starting 4 threads 00:45:26.728 00:45:26.728 job0: (groupid=0, jobs=1): err= 0: pid=106669: Wed Nov 20 05:56:46 2024 00:45:26.728 read: IOPS=3489, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1002msec) 00:45:26.728 slat (usec): min=10, max=11591, avg=137.37, stdev=729.48 00:45:26.728 clat (usec): min=1026, max=43815, avg=17036.13, stdev=6834.40 00:45:26.728 lat (usec): min=1042, max=43861, avg=17173.50, stdev=6898.81 00:45:26.728 clat percentiles (usec): 00:45:26.728 | 1.00th=[ 4146], 5.00th=[10028], 10.00th=[10814], 20.00th=[11338], 00:45:26.728 | 30.00th=[11994], 40.00th=[13698], 50.00th=[15270], 60.00th=[16188], 00:45:26.728 | 70.00th=[19268], 80.00th=[22676], 90.00th=[26608], 95.00th=[32113], 00:45:26.728 | 99.00th=[35914], 99.50th=[38011], 99.90th=[38536], 99.95th=[42730], 00:45:26.728 | 99.99th=[43779] 00:45:26.728 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:45:26.728 slat (usec): min=11, max=13103, 
avg=135.92, stdev=716.89 00:45:26.728 clat (usec): min=7560, max=42994, avg=18645.73, stdev=7411.69 00:45:26.728 lat (usec): min=7580, max=43035, avg=18781.65, stdev=7479.67 00:45:26.728 clat percentiles (usec): 00:45:26.728 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11076], 20.00th=[11469], 00:45:26.728 | 30.00th=[12387], 40.00th=[14222], 50.00th=[15008], 60.00th=[21890], 00:45:26.728 | 70.00th=[23987], 80.00th=[27132], 90.00th=[27919], 95.00th=[31589], 00:45:26.728 | 99.00th=[34341], 99.50th=[35914], 99.90th=[36963], 99.95th=[39584], 00:45:26.728 | 99.99th=[43254] 00:45:26.728 bw ( KiB/s): min=12288, max=16384, per=36.40%, avg=14336.00, stdev=2896.31, samples=2 00:45:26.728 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:45:26.728 lat (msec) : 2=0.04%, 4=0.34%, 10=3.25%, 20=61.45%, 50=34.92% 00:45:26.728 cpu : usr=4.10%, sys=10.39%, ctx=352, majf=0, minf=11 00:45:26.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:45:26.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:26.728 issued rwts: total=3496,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:26.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:26.728 job1: (groupid=0, jobs=1): err= 0: pid=106670: Wed Nov 20 05:56:46 2024 00:45:26.728 read: IOPS=2194, BW=8776KiB/s (8987kB/s)(8820KiB/1005msec) 00:45:26.728 slat (usec): min=8, max=12587, avg=184.47, stdev=995.59 00:45:26.728 clat (usec): min=3982, max=81812, avg=21923.33, stdev=9265.58 00:45:26.728 lat (usec): min=7275, max=81832, avg=22107.80, stdev=9352.10 00:45:26.728 clat percentiles (usec): 00:45:26.728 | 1.00th=[12518], 5.00th=[15533], 10.00th=[16319], 20.00th=[17695], 00:45:26.728 | 30.00th=[18220], 40.00th=[19006], 50.00th=[19792], 60.00th=[21103], 00:45:26.728 | 70.00th=[21890], 80.00th=[23200], 90.00th=[25822], 95.00th=[33817], 00:45:26.728 | 99.00th=[70779], 99.50th=[74974], 99.90th=[81265], 99.95th=[82314], 00:45:26.728 | 99.99th=[82314] 00:45:26.728 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:45:26.728 slat (usec): min=14, max=15709, avg=222.99, stdev=1098.23 00:45:26.728 clat (usec): min=12223, max=76849, avg=30697.90, stdev=16851.85 00:45:26.728 lat (usec): min=12246, max=76881, avg=30920.89, stdev=16962.28 00:45:26.728 clat percentiles (usec): 00:45:26.728 | 1.00th=[13566], 5.00th=[14353], 10.00th=[15139], 20.00th=[17433], 00:45:26.728 | 30.00th=[18220], 40.00th=[21103], 50.00th=[22938], 60.00th=[25560], 00:45:26.728 | 70.00th=[38536], 80.00th=[42730], 90.00th=[61080], 95.00th=[66323], 00:45:26.728 | 99.00th=[72877], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:45:26.728 | 99.99th=[77071] 00:45:26.728 bw ( KiB/s): min= 8192, max=12263, per=25.97%, avg=10227.50, stdev=2878.63, samples=2 00:45:26.728 iops : min= 2048, max= 3065, avg=2556.50, stdev=719.13, samples=2 00:45:26.728 lat (msec) : 4=0.02%, 10=0.31%, 20=44.11%, 50=45.50%, 100=10.05% 00:45:26.728 cpu : usr=2.59%, sys=7.97%, ctx=292, majf=0, minf=15 00:45:26.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:45:26.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:26.728 issued rwts: total=2205,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:26.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:26.728 job2: (groupid=0, jobs=1): 
err= 0: pid=106671: Wed Nov 20 05:56:46 2024 00:45:26.728 read: IOPS=1346, BW=5385KiB/s (5514kB/s)(5412KiB/1005msec) 00:45:26.728 slat (usec): min=9, max=25772, avg=379.03, stdev=2072.07 00:45:26.728 clat (usec): min=3708, max=85828, avg=43921.47, stdev=18183.50 00:45:26.728 lat (usec): min=7346, max=85847, avg=44300.50, stdev=18235.35 00:45:26.728 clat percentiles (usec): 00:45:26.728 | 1.00th=[10683], 5.00th=[23987], 10.00th=[24773], 20.00th=[25822], 00:45:26.728 | 30.00th=[26346], 40.00th=[33817], 50.00th=[46400], 60.00th=[50070], 00:45:26.728 | 70.00th=[54789], 80.00th=[61080], 90.00th=[68682], 95.00th=[76022], 00:45:26.728 | 99.00th=[85459], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:45:26.728 | 99.99th=[85459] 00:45:26.728 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:45:26.728 slat (usec): min=15, max=20579, avg=306.94, stdev=1710.69 00:45:26.728 clat (usec): min=17544, max=84329, avg=43134.94, stdev=16298.78 00:45:26.728 lat (usec): min=22468, max=84358, avg=43441.88, stdev=16291.40 00:45:26.728 clat percentiles (usec): 00:45:26.728 | 1.00th=[21365], 5.00th=[22676], 10.00th=[22938], 20.00th=[23987], 00:45:26.728 | 30.00th=[26084], 40.00th=[39584], 50.00th=[44303], 60.00th=[49546], 00:45:26.728 | 70.00th=[53740], 80.00th=[56361], 90.00th=[61604], 95.00th=[71828], 00:45:26.728 | 99.00th=[84411], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:45:26.728 | 99.99th=[84411] 00:45:26.728 bw ( KiB/s): min= 4096, max= 8175, per=15.58%, avg=6135.50, stdev=2884.29, samples=2 00:45:26.728 iops : min= 1024, max= 2043, avg=1533.50, stdev=720.54, samples=2 00:45:26.728 lat (msec) : 4=0.03%, 10=0.28%, 20=1.11%, 50=57.36%, 100=41.23% 00:45:26.728 cpu : usr=1.69%, sys=5.58%, ctx=122, majf=0, minf=15 00:45:26.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:45:26.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:26.728 issued rwts: total=1353,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:26.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:26.728 job3: (groupid=0, jobs=1): err= 0: pid=106672: Wed Nov 20 05:56:46 2024 00:45:26.728 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:45:26.728 slat (usec): min=8, max=15000, avg=215.36, stdev=1246.71 00:45:26.728 clat (usec): min=17870, max=46223, avg=27941.83, stdev=4825.14 00:45:26.728 lat (usec): min=17895, max=46245, avg=28157.19, stdev=4933.44 00:45:26.728 clat percentiles (usec): 00:45:26.728 | 1.00th=[17957], 5.00th=[20055], 10.00th=[22152], 20.00th=[24511], 00:45:26.728 | 30.00th=[25035], 40.00th=[26346], 50.00th=[26870], 60.00th=[27919], 00:45:26.728 | 70.00th=[29754], 80.00th=[32375], 90.00th=[34866], 95.00th=[36963], 00:45:26.728 | 99.00th=[41157], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:45:26.729 | 99.99th=[46400] 00:45:26.729 write: IOPS=2204, BW=8820KiB/s (9032kB/s)(8864KiB/1005msec); 0 zone resets 00:45:26.729 slat (usec): min=17, max=17515, avg=243.11, stdev=1307.44 00:45:26.729 clat (usec): min=259, max=62390, avg=31184.54, stdev=12464.42 00:45:26.729 lat (usec): min=5165, max=62423, avg=31427.66, stdev=12578.94 00:45:26.729 clat percentiles (usec): 00:45:26.729 | 1.00th=[ 5997], 5.00th=[19530], 10.00th=[20579], 20.00th=[21103], 00:45:26.729 | 30.00th=[22676], 40.00th=[25035], 50.00th=[28443], 60.00th=[30540], 00:45:26.729 | 70.00th=[33162], 80.00th=[45351], 90.00th=[52167], 95.00th=[54264], 
00:45:26.729 | 99.00th=[57410], 99.50th=[57410], 99.90th=[62129], 99.95th=[62129], 00:45:26.729 | 99.99th=[62653] 00:45:26.729 bw ( KiB/s): min= 8208, max= 8512, per=21.23%, avg=8360.00, stdev=214.96, samples=2 00:45:26.729 iops : min= 2052, max= 2128, avg=2090.00, stdev=53.74, samples=2 00:45:26.729 lat (usec) : 500=0.02% 00:45:26.729 lat (msec) : 10=1.48%, 20=4.90%, 50=84.26%, 100=9.33% 00:45:26.729 cpu : usr=2.79%, sys=7.97%, ctx=164, majf=0, minf=11 00:45:26.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:45:26.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:26.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:26.729 issued rwts: total=2048,2216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:26.729 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:26.729 00:45:26.729 Run status group 0 (all jobs): 00:45:26.729 READ: bw=35.4MiB/s (37.1MB/s), 5385KiB/s-13.6MiB/s (5514kB/s-14.3MB/s), io=35.6MiB (37.3MB), run=1002-1005msec 00:45:26.729 WRITE: bw=38.5MiB/s (40.3MB/s), 6113KiB/s-14.0MiB/s (6260kB/s-14.7MB/s), io=38.7MiB (40.5MB), run=1002-1005msec 00:45:26.729 00:45:26.729 Disk stats (read/write): 00:45:26.729 nvme0n1: ios=2610/2879, merge=0/0, ticks=15990/16981, in_queue=32971, util=87.36% 00:45:26.729 nvme0n2: ios=2097/2414, merge=0/0, ticks=19438/28093, in_queue=47531, util=87.47% 00:45:26.729 nvme0n3: ios=1056/1536, merge=0/0, ticks=10781/14384, in_queue=25165, util=88.65% 00:45:26.729 nvme0n4: ios=1536/2026, merge=0/0, ticks=19746/30136, in_queue=49882, util=89.61% 00:45:26.729 05:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:45:26.729 [global] 00:45:26.729 thread=1 00:45:26.729 invalidate=1 00:45:26.729 rw=randwrite 00:45:26.729 time_based=1 00:45:26.729 runtime=1 00:45:26.729 ioengine=libaio 00:45:26.729 direct=1 00:45:26.729 bs=4096 00:45:26.729 iodepth=128 00:45:26.729 norandommap=0 00:45:26.729 numjobs=1 00:45:26.729 00:45:26.729 verify_dump=1 00:45:26.729 verify_backlog=512 00:45:26.729 verify_state_save=0 00:45:26.729 do_verify=1 00:45:26.729 verify=crc32c-intel 00:45:26.729 [job0] 00:45:26.729 filename=/dev/nvme0n1 00:45:26.729 [job1] 00:45:26.729 filename=/dev/nvme0n2 00:45:26.729 [job2] 00:45:26.729 filename=/dev/nvme0n3 00:45:26.729 [job3] 00:45:26.729 filename=/dev/nvme0n4 00:45:26.729 Could not set queue depth (nvme0n1) 00:45:26.729 Could not set queue depth (nvme0n2) 00:45:26.729 Could not set queue depth (nvme0n3) 00:45:26.729 Could not set queue depth (nvme0n4) 00:45:26.729 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:26.729 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:26.729 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:26.729 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:45:26.729 fio-3.35 00:45:26.729 Starting 4 threads 00:45:28.109 00:45:28.109 job0: (groupid=0, jobs=1): err= 0: pid=106732: Wed Nov 20 05:56:47 2024 00:45:28.109 read: IOPS=2003, BW=8016KiB/s (8208kB/s)(8192KiB/1022msec) 00:45:28.109 slat (usec): min=6, max=18522, avg=242.35, stdev=1466.70 00:45:28.109 clat (msec): min=9, max=107, avg=24.48, stdev=15.52 00:45:28.109 lat (msec): min=9, 
max=107, avg=24.72, stdev=15.75 00:45:28.109 clat percentiles (msec): 00:45:28.109 | 1.00th=[ 10], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 17], 00:45:28.109 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 21], 00:45:28.109 | 70.00th=[ 25], 80.00th=[ 29], 90.00th=[ 45], 95.00th=[ 54], 00:45:28.109 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 108], 99.95th=[ 108], 00:45:28.109 | 99.99th=[ 108] 00:45:28.109 write: IOPS=2354, BW=9417KiB/s (9643kB/s)(9624KiB/1022msec); 0 zone resets 00:45:28.109 slat (usec): min=6, max=41323, avg=200.21, stdev=1366.05 00:45:28.109 clat (msec): min=5, max=107, avg=32.95, stdev=19.37 00:45:28.109 lat (msec): min=5, max=107, avg=33.15, stdev=19.45 00:45:28.109 clat percentiles (msec): 00:45:28.109 | 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:45:28.109 | 30.00th=[ 21], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:45:28.109 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 51], 95.00th=[ 91], 00:45:28.109 | 99.00th=[ 94], 99.50th=[ 94], 99.90th=[ 107], 99.95th=[ 108], 00:45:28.109 | 99.99th=[ 108] 00:45:28.109 bw ( KiB/s): min= 7240, max=11014, per=23.20%, avg=9127.00, stdev=2668.62, samples=2 00:45:28.109 iops : min= 1810, max= 2753, avg=2281.50, stdev=666.80, samples=2 00:45:28.109 lat (msec) : 10=1.71%, 20=41.31%, 50=48.52%, 100=8.15%, 250=0.31% 00:45:28.109 cpu : usr=2.55%, sys=6.46%, ctx=279, majf=0, minf=15 00:45:28.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:45:28.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:28.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:28.109 issued rwts: total=2048,2406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:28.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:28.109 job1: (groupid=0, jobs=1): err= 0: pid=106733: Wed Nov 20 05:56:47 2024 00:45:28.109 read: IOPS=2063, BW=8254KiB/s (8452kB/s)(8328KiB/1009msec) 00:45:28.109 slat (usec): min=8, max=29474, avg=224.05, stdev=1408.00 00:45:28.109 clat (usec): min=7035, max=77594, avg=24691.40, stdev=12396.47 00:45:28.109 lat (usec): min=10725, max=77616, avg=24915.45, stdev=12531.97 00:45:28.109 clat percentiles (usec): 00:45:28.109 | 1.00th=[11469], 5.00th=[13173], 10.00th=[13960], 20.00th=[14615], 00:45:28.109 | 30.00th=[16057], 40.00th=[20317], 50.00th=[22938], 60.00th=[23987], 00:45:28.109 | 70.00th=[25822], 80.00th=[27657], 90.00th=[43254], 95.00th=[57410], 00:45:28.109 | 99.00th=[68682], 99.50th=[68682], 99.90th=[77071], 99.95th=[77071], 00:45:28.109 | 99.99th=[77071] 00:45:28.109 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:45:28.109 slat (usec): min=6, max=29865, avg=200.51, stdev=1261.13 00:45:28.109 clat (usec): min=6495, max=80722, avg=29777.73, stdev=12223.94 00:45:28.109 lat (usec): min=6523, max=80784, avg=29978.24, stdev=12317.67 00:45:28.109 clat percentiles (usec): 00:45:28.109 | 1.00th=[12780], 5.00th=[16712], 10.00th=[19792], 20.00th=[21890], 00:45:28.109 | 30.00th=[23200], 40.00th=[24511], 50.00th=[25035], 60.00th=[25560], 00:45:28.109 | 70.00th=[32637], 80.00th=[37487], 90.00th=[50070], 95.00th=[51119], 00:45:28.109 | 99.00th=[69731], 99.50th=[72877], 99.90th=[73925], 99.95th=[77071], 00:45:28.109 | 99.99th=[80217] 00:45:28.109 bw ( KiB/s): min= 8175, max=11544, per=25.06%, avg=9859.50, stdev=2382.24, samples=2 00:45:28.109 iops : min= 2043, max= 2886, avg=2464.50, stdev=596.09, samples=2 00:45:28.109 lat (msec) : 10=0.34%, 20=22.55%, 50=69.63%, 100=7.48% 00:45:28.109 cpu : usr=2.28%, sys=7.54%, 
ctx=412, majf=0, minf=11 00:45:28.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:45:28.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:28.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:28.109 issued rwts: total=2082,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:28.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:28.109 job2: (groupid=0, jobs=1): err= 0: pid=106734: Wed Nov 20 05:56:47 2024 00:45:28.109 read: IOPS=2837, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1008msec) 00:45:28.109 slat (usec): min=4, max=14621, avg=164.23, stdev=1074.09 00:45:28.109 clat (usec): min=5311, max=39751, avg=20764.86, stdev=5593.15 00:45:28.109 lat (usec): min=7396, max=39772, avg=20929.09, stdev=5665.79 00:45:28.109 clat percentiles (usec): 00:45:28.109 | 1.00th=[11863], 5.00th=[13698], 10.00th=[14091], 20.00th=[14746], 00:45:28.109 | 30.00th=[17433], 40.00th=[18482], 50.00th=[20055], 60.00th=[21627], 00:45:28.109 | 70.00th=[24511], 80.00th=[26084], 90.00th=[27919], 95.00th=[30016], 00:45:28.109 | 99.00th=[34341], 99.50th=[35914], 99.90th=[36963], 99.95th=[37487], 00:45:28.109 | 99.99th=[39584] 00:45:28.109 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:45:28.109 slat (usec): min=6, max=13977, avg=164.61, stdev=833.11 00:45:28.109 clat (usec): min=6424, max=58234, avg=22215.86, stdev=7612.96 00:45:28.109 lat (usec): min=6445, max=58248, avg=22380.48, stdev=7673.71 00:45:28.109 clat percentiles (usec): 00:45:28.109 | 1.00th=[ 9765], 5.00th=[11863], 10.00th=[14222], 20.00th=[16581], 00:45:28.109 | 30.00th=[17433], 40.00th=[20055], 50.00th=[22676], 60.00th=[23987], 00:45:28.109 | 70.00th=[24773], 80.00th=[25297], 90.00th=[30016], 95.00th=[36963], 00:45:28.109 | 99.00th=[51119], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:45:28.109 | 99.99th=[58459] 00:45:28.109 bw ( KiB/s): min=11408, max=13194, per=31.27%, avg=12301.00, stdev=1262.89, samples=2 00:45:28.109 iops : min= 2852, max= 3298, avg=3075.00, stdev=315.37, samples=2 00:45:28.109 lat (msec) : 10=0.94%, 20=42.87%, 50=55.53%, 100=0.66% 00:45:28.109 cpu : usr=2.48%, sys=8.84%, ctx=481, majf=0, minf=13 00:45:28.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:45:28.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:28.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:28.109 issued rwts: total=2860,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:28.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:28.109 job3: (groupid=0, jobs=1): err= 0: pid=106735: Wed Nov 20 05:56:47 2024 00:45:28.109 read: IOPS=1502, BW=6012KiB/s (6156kB/s)(6144KiB/1022msec) 00:45:28.109 slat (usec): min=9, max=27962, avg=216.58, stdev=1544.49 00:45:28.109 clat (usec): min=7563, max=51270, avg=26318.87, stdev=10404.28 00:45:28.109 lat (usec): min=7580, max=51313, avg=26535.46, stdev=10469.73 00:45:28.109 clat percentiles (usec): 00:45:28.109 | 1.00th=[ 8225], 5.00th=[13566], 10.00th=[17695], 20.00th=[18220], 00:45:28.109 | 30.00th=[18482], 40.00th=[19006], 50.00th=[20055], 60.00th=[26870], 00:45:28.110 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[46400], 00:45:28.110 | 99.00th=[47973], 99.50th=[49021], 99.90th=[49546], 99.95th=[51119], 00:45:28.110 | 99.99th=[51119] 00:45:28.110 write: IOPS=1970, BW=7883KiB/s (8072kB/s)(8056KiB/1022msec); 0 zone resets 00:45:28.110 slat (usec): min=6, max=30430, avg=321.82, 
stdev=1684.65 00:45:28.110 clat (msec): min=5, max=140, avg=44.17, stdev=26.73 00:45:28.110 lat (msec): min=5, max=140, avg=44.49, stdev=26.88 00:45:28.110 clat percentiles (msec): 00:45:28.110 | 1.00th=[ 9], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 32], 00:45:28.110 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 36], 00:45:28.110 | 70.00th=[ 41], 80.00th=[ 62], 90.00th=[ 86], 95.00th=[ 100], 00:45:28.110 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:45:28.110 | 99.99th=[ 142] 00:45:28.110 bw ( KiB/s): min= 7240, max= 7871, per=19.20%, avg=7555.50, stdev=446.18, samples=2 00:45:28.110 iops : min= 1810, max= 1967, avg=1888.50, stdev=111.02, samples=2 00:45:28.110 lat (msec) : 10=1.75%, 20=26.56%, 50=57.75%, 100=11.27%, 250=2.68% 00:45:28.110 cpu : usr=2.35%, sys=5.58%, ctx=235, majf=0, minf=13 00:45:28.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:45:28.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:28.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:45:28.110 issued rwts: total=1536,2014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:28.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:45:28.110 00:45:28.110 Run status group 0 (all jobs): 00:45:28.110 READ: bw=32.6MiB/s (34.2MB/s), 6012KiB/s-11.1MiB/s (6156kB/s-11.6MB/s), io=33.3MiB (34.9MB), run=1008-1022msec 00:45:28.110 WRITE: bw=38.4MiB/s (40.3MB/s), 7883KiB/s-11.9MiB/s (8072kB/s-12.5MB/s), io=39.3MiB (41.2MB), run=1008-1022msec 00:45:28.110 00:45:28.110 Disk stats (read/write): 00:45:28.110 nvme0n1: ios=1586/1991, merge=0/0, ticks=36137/61200, in_queue=97337, util=86.69% 00:45:28.110 nvme0n2: ios=1737/2048, merge=0/0, ticks=22029/29007, in_queue=51036, util=88.56% 00:45:28.110 nvme0n3: ios=2512/2560, merge=0/0, ticks=39726/48614, in_queue=88340, util=88.74% 00:45:28.110 nvme0n4: ios=1487/1536, merge=0/0, ticks=38295/66244, in_queue=104539, util=89.59% 00:45:28.110 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:45:28.110 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=106749 00:45:28.110 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:45:28.110 05:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:45:28.110 [global] 00:45:28.110 thread=1 00:45:28.110 invalidate=1 00:45:28.110 rw=read 00:45:28.110 time_based=1 00:45:28.110 runtime=10 00:45:28.110 ioengine=libaio 00:45:28.110 direct=1 00:45:28.110 bs=4096 00:45:28.110 iodepth=1 00:45:28.110 norandommap=1 00:45:28.110 numjobs=1 00:45:28.110 00:45:28.110 [job0] 00:45:28.110 filename=/dev/nvme0n1 00:45:28.110 [job1] 00:45:28.110 filename=/dev/nvme0n2 00:45:28.110 [job2] 00:45:28.110 filename=/dev/nvme0n3 00:45:28.110 [job3] 00:45:28.110 filename=/dev/nvme0n4 00:45:28.110 Could not set queue depth (nvme0n1) 00:45:28.110 Could not set queue depth (nvme0n2) 00:45:28.110 Could not set queue depth (nvme0n3) 00:45:28.110 Could not set queue depth (nvme0n4) 00:45:28.369 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:28.369 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:28.369 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:45:28.369 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:45:28.369 fio-3.35 00:45:28.369 Starting 4 threads 00:45:31.659 05:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:45:31.659 fio: pid=106792, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:45:31.659 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33865728, buflen=4096 00:45:31.659 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:45:31.659 fio: pid=106791, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:45:31.659 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36253696, buflen=4096 00:45:31.659 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:45:31.659 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:45:31.659 fio: pid=106789, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:45:31.659 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58294272, buflen=4096 00:45:31.659 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:45:31.659 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:45:31.919 fio: pid=106790, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:45:31.919 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=41046016, buflen=4096 00:45:31.919 00:45:31.919 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106789: Wed Nov 20 05:56:51 2024 00:45:31.919 read: IOPS=4302, BW=16.8MiB/s (17.6MB/s)(55.6MiB/3308msec) 00:45:31.919 slat (usec): min=10, max=15578, avg=15.23, stdev=182.56 00:45:31.919 clat (usec): min=2, max=2129, avg=216.50, stdev=62.62 00:45:31.919 lat (usec): min=128, max=15839, avg=231.73, stdev=193.59 00:45:31.919 clat percentiles (usec): 00:45:31.919 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:45:31.919 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 217], 00:45:31.919 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 255], 95.00th=[ 334], 00:45:31.919 | 99.00th=[ 449], 99.50th=[ 510], 99.90th=[ 570], 99.95th=[ 734], 00:45:31.919 | 99.99th=[ 2114] 00:45:31.919 bw ( KiB/s): min=11392, max=19856, per=36.90%, avg=17160.00, stdev=3212.89, samples=6 00:45:31.919 iops : min= 2848, max= 4964, avg=4290.00, stdev=803.22, samples=6 00:45:31.919 lat (usec) : 4=0.01%, 250=88.39%, 500=11.02%, 750=0.52%, 1000=0.02% 00:45:31.919 lat (msec) : 2=0.01%, 4=0.01% 00:45:31.919 cpu : usr=0.60%, sys=4.11%, ctx=14237, majf=0, minf=1 00:45:31.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:31.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:45:31.919 issued rwts: total=14233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:31.919 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106790: Wed Nov 20 05:56:51 2024 00:45:31.919 read: IOPS=2815, BW=11.0MiB/s (11.5MB/s)(39.1MiB/3559msec) 00:45:31.919 slat (usec): min=13, max=12635, avg=23.96, stdev=200.53 00:45:31.919 clat (usec): min=4, max=37545, avg=329.55, stdev=386.52 00:45:31.919 lat (usec): min=164, max=37560, avg=353.52, stdev=435.81 00:45:31.919 clat percentiles (usec): 00:45:31.919 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 196], 20.00th=[ 277], 00:45:31.919 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 351], 00:45:31.919 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 400], 95.00th=[ 412], 00:45:31.919 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 1020], 99.95th=[ 3195], 00:45:31.919 | 99.99th=[ 4228] 00:45:31.919 bw ( KiB/s): min= 9480, max=12384, per=23.42%, avg=10890.67, stdev=1394.65, samples=6 00:45:31.919 iops : min= 2370, max= 3096, avg=2722.67, stdev=348.66, samples=6 00:45:31.919 lat (usec) : 10=0.01%, 250=11.37%, 500=88.34%, 750=0.10%, 1000=0.05% 00:45:31.919 lat (msec) : 2=0.04%, 4=0.06%, 10=0.01%, 50=0.01% 00:45:31.919 cpu : usr=0.70%, sys=4.61%, ctx=10034, majf=0, minf=2 00:45:31.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:31.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.919 issued rwts: total=10022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:31.919 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106791: Wed Nov 20 05:56:51 2024 00:45:31.919 read: IOPS=2849, BW=11.1MiB/s (11.7MB/s)(34.6MiB/3106msec) 00:45:31.919 slat (usec): min=6, max=12854, avg=14.52, stdev=182.78 00:45:31.919 clat (usec): min=157, max=4034, avg=335.32, stdev=119.72 00:45:31.919 lat (usec): min=173, max=13087, avg=349.84, stdev=217.50 00:45:31.919 clat percentiles (usec): 00:45:31.919 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 210], 20.00th=[ 273], 00:45:31.919 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 347], 00:45:31.919 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 433], 95.00th=[ 461], 00:45:31.919 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 1205], 99.95th=[ 3425], 00:45:31.919 | 99.99th=[ 4047] 00:45:31.919 bw ( KiB/s): min= 9696, max=12520, per=24.11%, avg=11212.00, stdev=1374.16, samples=6 00:45:31.919 iops : min= 2424, max= 3130, avg=2803.00, stdev=343.54, samples=6 00:45:31.919 lat (usec) : 250=17.50%, 500=79.10%, 750=3.19%, 1000=0.08% 00:45:31.919 lat (msec) : 2=0.06%, 4=0.06%, 10=0.01% 00:45:31.919 cpu : usr=0.64%, sys=2.90%, ctx=8856, majf=0, minf=2 00:45:31.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:31.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.919 issued rwts: total=8852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:31.919 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=106792: Wed Nov 20 05:56:51 2024 00:45:31.919 read: IOPS=2856, BW=11.2MiB/s 
(11.7MB/s)(32.3MiB/2895msec) 00:45:31.919 slat (nsec): min=7132, max=76618, avg=11573.17, stdev=3477.47 00:45:31.919 clat (usec): min=166, max=2668, avg=337.32, stdev=84.18 00:45:31.919 lat (usec): min=175, max=2683, avg=348.89, stdev=84.62 00:45:31.919 clat percentiles (usec): 00:45:31.919 | 1.00th=[ 192], 5.00th=[ 212], 10.00th=[ 258], 20.00th=[ 289], 00:45:31.919 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 338], 00:45:31.919 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 449], 00:45:31.919 | 99.00th=[ 562], 99.50th=[ 594], 99.90th=[ 775], 99.95th=[ 922], 00:45:31.919 | 99.99th=[ 2671] 00:45:31.919 bw ( KiB/s): min= 9696, max=12496, per=24.32%, avg=11308.80, stdev=1377.31, samples=5 00:45:31.919 iops : min= 2424, max= 3124, avg=2827.20, stdev=344.33, samples=5 00:45:31.919 lat (usec) : 250=9.44%, 500=88.11%, 750=2.32%, 1000=0.06% 00:45:31.919 lat (msec) : 2=0.01%, 4=0.04% 00:45:31.919 cpu : usr=0.48%, sys=3.08%, ctx=8271, majf=0, minf=2 00:45:31.919 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:31.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.919 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:31.919 issued rwts: total=8269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:31.919 latency : target=0, window=0, percentile=100.00%, depth=1 00:45:31.919 00:45:31.919 Run status group 0 (all jobs): 00:45:31.919 READ: bw=45.4MiB/s (47.6MB/s), 11.0MiB/s-16.8MiB/s (11.5MB/s-17.6MB/s), io=162MiB (169MB), run=2895-3559msec 00:45:31.919 00:45:31.919 Disk stats (read/write): 00:45:31.919 nvme0n1: ios=13350/0, merge=0/0, ticks=2972/0, in_queue=2972, util=95.16% 00:45:31.919 nvme0n2: ios=9038/0, merge=0/0, ticks=3168/0, in_queue=3168, util=95.18% 00:45:31.919 nvme0n3: ios=7948/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.32% 00:45:31.919 nvme0n4: ios=8172/0, merge=0/0, ticks=2718/0, in_queue=2718, util=96.75% 00:45:31.919 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:45:31.920 05:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:45:32.179 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:45:32.179 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:45:32.438 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:45:32.438 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:45:32.698 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:45:32.698 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:45:32.957 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
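The err=95 results above are the point of the hotplug test: fio is started in the background and, while its reads are still in flight, the backing bdevs are torn down over JSON-RPC, so each /dev/nvme0nX starts failing with "Operation not supported". A condensed sketch of that sequence, reusing only commands and flags visible in this run (the bdev-to-namespace mapping is taken from the io_u errors logged above):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    FIO=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper

    "$FIO" -p nvmf -i 4096 -d 1 -t read -r 10 &   # same flags as fio.sh@58
    fio_pid=$!
    sleep 3                                       # let I/O get in flight (fio.sh@61)

    "$RPC" bdev_raid_delete concat0               # surfaces as errors on /dev/nvme0n4
    "$RPC" bdev_raid_delete raid0                 # /dev/nvme0n3
    "$RPC" bdev_malloc_delete Malloc0             # /dev/nvme0n1
    "$RPC" bdev_malloc_delete Malloc1             # /dev/nvme0n2

    wait "$fio_pid" || echo 'fio failed, as the hotplug test expects'

The loop continuing below then deletes the remaining Malloc bdevs as cleanup.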
00:45:32.957 05:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:45:33.216 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:45:33.216 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 106749 00:45:33.216 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:45:33.216 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:33.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:33.217 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:33.217 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:45:33.217 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:45:33.217 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:33.217 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:45:33.217 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:33.476 nvmf hotplug test: fio failed as expected 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:45:33.476 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:33.476 05:56:53 
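The lsblk/grep pair above implements waitforserial_disconnect: after nvme disconnect it waits until no block device reports the subsystem serial SPDKISFASTANDAWESOME any more, i.e. until the kernel has torn the namespaces down. A rough shell equivalent of that helper; the retry bound and sleep interval are assumptions, not values taken from this log:

    # Poll until no block device carries the given SERIAL any more.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # assumed ~15s timeout bound
            sleep 1
        done
        return 0                         # success path, as at autotest_common.sh@1233
    }

    waitforserial_disconnect SPDKISFASTANDAWESOME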
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:33.476 rmmod nvme_tcp 00:45:33.476 rmmod nvme_fabrics 00:45:33.735 rmmod nvme_keyring 00:45:33.735 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:33.735 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:45:33.735 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:45:33.735 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 106256 ']' 00:45:33.735 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 106256 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 106256 ']' 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 106256 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 106256 00:45:33.736 killing process with pid 106256 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 106256' 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 106256 00:45:33.736 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 106256 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:33.995 05:56:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:33.995 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:34.254 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:34.254 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:34.254 05:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:45:34.254 00:45:34.254 real 0m20.558s 00:45:34.254 user 0m59.502s 00:45:34.254 sys 0m10.430s 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:45:34.254 ************************************ 00:45:34.254 END TEST nvmf_fio_target 00:45:34.254 ************************************ 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:34.254 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:34.254 ************************************ 00:45:34.254 START TEST nvmf_bdevio 00:45:34.254 ************************************ 00:45:34.254 05:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:45:34.515 * Looking for test storage... 00:45:34.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:34.515 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:34.515 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:45:34.515 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:34.515 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:34.515 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.516 --rc genhtml_branch_coverage=1 00:45:34.516 --rc genhtml_function_coverage=1 00:45:34.516 --rc genhtml_legend=1 00:45:34.516 --rc geninfo_all_blocks=1 00:45:34.516 --rc geninfo_unexecuted_blocks=1 00:45:34.516 00:45:34.516 ' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.516 --rc genhtml_branch_coverage=1 00:45:34.516 --rc genhtml_function_coverage=1 00:45:34.516 --rc genhtml_legend=1 00:45:34.516 --rc geninfo_all_blocks=1 00:45:34.516 --rc geninfo_unexecuted_blocks=1 00:45:34.516 00:45:34.516 ' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.516 --rc genhtml_branch_coverage=1 00:45:34.516 --rc genhtml_function_coverage=1 00:45:34.516 --rc genhtml_legend=1 00:45:34.516 --rc geninfo_all_blocks=1 00:45:34.516 --rc geninfo_unexecuted_blocks=1 00:45:34.516 00:45:34.516 ' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:34.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:34.516 --rc genhtml_branch_coverage=1 00:45:34.516 --rc genhtml_function_coverage=1 00:45:34.516 --rc genhtml_legend=1 00:45:34.516 --rc geninfo_all_blocks=1 00:45:34.516 --rc geninfo_unexecuted_blocks=1 00:45:34.516 00:45:34.516 ' 00:45:34.516 05:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:34.516 05:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:34.516 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:34.517 Cannot find device "nvmf_init_br" 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:34.517 Cannot find device "nvmf_init_br2" 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:34.517 Cannot find device "nvmf_tgt_br" 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:34.517 Cannot find device "nvmf_tgt_br2" 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:45:34.517 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:34.777 Cannot find device "nvmf_init_br" 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:34.777 Cannot find device "nvmf_init_br2" 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:34.777 Cannot find device "nvmf_tgt_br" 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:34.777 Cannot find device "nvmf_tgt_br2" 00:45:34.777 05:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:34.777 Cannot find device "nvmf_br" 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:45:34.777 Cannot find device "nvmf_init_if" 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:34.777 Cannot find device "nvmf_init_if2" 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:34.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:34.777 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:34.777 05:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:34.777 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:35.037 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:45:35.037 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:45:35.037 00:45:35.037 --- 10.0.0.3 ping statistics --- 00:45:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:35.037 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:35.037 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:35.037 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:45:35.037 00:45:35.037 --- 10.0.0.4 ping statistics --- 00:45:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:35.037 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:45:35.037 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:35.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:35.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:45:35.037 00:45:35.037 --- 10.0.0.1 ping statistics --- 00:45:35.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:35.038 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:35.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:35.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:45:35.038 00:45:35.038 --- 10.0.0.2 ping statistics --- 00:45:35.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:35.038 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=107170 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 107170 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 107170 ']' 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:35.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:35.038 05:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:35.038 [2024-11-20 05:56:54.910845] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:35.038 [2024-11-20 05:56:54.912245] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:45:35.038 [2024-11-20 05:56:54.912308] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:35.296 [2024-11-20 05:56:55.058487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:35.296 [2024-11-20 05:56:55.100971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:35.296 [2024-11-20 05:56:55.101023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:35.296 [2024-11-20 05:56:55.101032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:35.296 [2024-11-20 05:56:55.101041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:35.296 [2024-11-20 05:56:55.101047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:35.296 [2024-11-20 05:56:55.101850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:45:35.296 [2024-11-20 05:56:55.102022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:45:35.296 [2024-11-20 05:56:55.102201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:45:35.296 [2024-11-20 05:56:55.102201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:45:35.296 [2024-11-20 05:56:55.172136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:35.297 [2024-11-20 05:56:55.172445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:35.297 [2024-11-20 05:56:55.173800] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
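The reactor and intr-mode notices around this point trace back directly to how the target was launched: -m 0x78 is a hexadecimal core mask with bits 3 through 6 set, which is why exactly four reactors come up on cores 3-6, and --interrupt-mode is what every "Set spdk_thread ... to intr mode" line confirms per poll group. Stripped of the xtrace noise, the launch captured above reduces to:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
            -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    # -m 0x78 == 0b01111000 -> cores 3,4,5,6 (the four reactors logged above)
    # -e 0xFFFF             -> tracepoint group mask, per the startup notice
    # -i 0                  -> instance id, matching the 'spdk_trace -s nvmf -i 0' hint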
00:45:35.297 [2024-11-20 05:56:55.174055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:35.297 [2024-11-20 05:56:55.174130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:36.233 [2024-11-20 05:56:55.923210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:36.233 Malloc0 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:36.233 05:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:36.233 [2024-11-20 05:56:55.999238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:36.233 { 00:45:36.233 "params": { 00:45:36.233 "name": "Nvme$subsystem", 00:45:36.233 "trtype": "$TEST_TRANSPORT", 00:45:36.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:36.233 "adrfam": "ipv4", 00:45:36.233 "trsvcid": "$NVMF_PORT", 00:45:36.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:36.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:36.233 "hdgst": ${hdgst:-false}, 00:45:36.233 "ddgst": ${ddgst:-false} 00:45:36.233 }, 00:45:36.233 "method": "bdev_nvme_attach_controller" 00:45:36.233 } 00:45:36.233 EOF 00:45:36.233 )") 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:45:36.233 05:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:36.233 "params": { 00:45:36.233 "name": "Nvme1", 00:45:36.233 "trtype": "tcp", 00:45:36.233 "traddr": "10.0.0.3", 00:45:36.233 "adrfam": "ipv4", 00:45:36.233 "trsvcid": "4420", 00:45:36.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:36.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:36.233 "hdgst": false, 00:45:36.234 "ddgst": false 00:45:36.234 }, 00:45:36.234 "method": "bdev_nvme_attach_controller" 00:45:36.234 }' 00:45:36.234 [2024-11-20 05:56:56.063410] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
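The JSON generated above hands bdevio's userspace NVMe initiator the same connect parameters a kernel host would use. For comparison only (nvme-cli is not invoked in this test), an equivalent kernel-side connect from the root namespace would be:

    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn nqn.2016-06.io.spdk:host1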
00:45:36.234 [2024-11-20 05:56:56.063507] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107220 ] 00:45:36.492 [2024-11-20 05:56:56.225336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:45:36.492 [2024-11-20 05:56:56.282679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:36.492 [2024-11-20 05:56:56.282857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:45:36.492 [2024-11-20 05:56:56.282861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:36.751 I/O targets: 00:45:36.751 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:45:36.751 00:45:36.751 00:45:36.751 CUnit - A unit testing framework for C - Version 2.1-3 00:45:36.751 http://cunit.sourceforge.net/ 00:45:36.751 00:45:36.751 00:45:36.751 Suite: bdevio tests on: Nvme1n1 00:45:36.751 Test: blockdev write read block ...passed 00:45:36.751 Test: blockdev write zeroes read block ...passed 00:45:36.751 Test: blockdev write zeroes read no split ...passed 00:45:36.751 Test: blockdev write zeroes read split ...passed 00:45:36.751 Test: blockdev write zeroes read split partial ...passed 00:45:36.751 Test: blockdev reset ...[2024-11-20 05:56:56.562638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:45:36.751 [2024-11-20 05:56:56.562718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193ff50 (9): Bad file descriptor 00:45:36.751 [2024-11-20 05:56:56.565818] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:45:36.751 passed 00:45:36.751 Test: blockdev write read 8 blocks ...passed 00:45:36.751 Test: blockdev write read size > 128k ...passed 00:45:36.751 Test: blockdev write read invalid size ...passed 00:45:36.751 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:45:36.751 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:45:36.751 Test: blockdev write read max offset ...passed 00:45:37.011 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:45:37.011 Test: blockdev writev readv 8 blocks ...passed 00:45:37.011 Test: blockdev writev readv 30 x 1block ...passed 00:45:37.011 Test: blockdev writev readv block ...passed 00:45:37.011 Test: blockdev writev readv size > 128k ...passed 00:45:37.011 Test: blockdev writev readv size > 128k in two iovs ...passed 00:45:37.011 Test: blockdev comparev and writev ...[2024-11-20 05:56:56.738054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.738099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.738116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.738127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.738638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.738665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.738680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.738691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.739153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.739177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.739200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.739211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.739722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.739749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.739765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:45:37.011 [2024-11-20 05:56:56.739775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:45:37.011 passed 00:45:37.011 Test: blockdev nvme passthru rw ...passed 00:45:37.011 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:56:56.822808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:37.011 [2024-11-20 05:56:56.822837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.823086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:37.011 [2024-11-20 05:56:56.823109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.823509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:37.011 [2024-11-20 05:56:56.823540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:45:37.011 [2024-11-20 05:56:56.823867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:45:37.011 [2024-11-20 05:56:56.823892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:45:37.011 passed 00:45:37.011 Test: blockdev nvme admin passthru ...passed 00:45:37.011 Test: blockdev copy ...passed 00:45:37.011 00:45:37.011 Run Summary: Type Total Ran Passed Failed Inactive 00:45:37.011 suites 1 1 n/a 0 0 00:45:37.011 tests 23 23 23 0 0 00:45:37.011 asserts 152 152 152 0 n/a 00:45:37.011 00:45:37.011 Elapsed time = 0.865 seconds 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:37.270 rmmod nvme_tcp 00:45:37.270 rmmod nvme_fabrics 00:45:37.270 rmmod nvme_keyring 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
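This teardown follows the trap discipline used throughout these scripts: cleanup is registered on SIGINT/SIGTERM/EXIT as soon as the target is up, and after the test body completes the trap is cleared and the same cleanup runs explicitly, so it executes exactly once on both the success and failure paths. Condensed from the trace:

    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    # ... test body (transport, subsystem, bdevio run) ...
    trap - SIGINT SIGTERM EXIT
    nvmftestfini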
00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 107170 ']' 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 107170 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 107170 ']' 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 107170 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:37.270 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 107170 00:45:37.529 killing process with pid 107170 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 107170' 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 107170 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 107170 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:37.529 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:37.530 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:37.789 05:56:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:45:37.789 00:45:37.789 real 0m3.563s 00:45:37.789 user 0m7.084s 00:45:37.789 sys 0m1.368s 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:37.789 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:45:37.789 ************************************ 00:45:37.789 END TEST nvmf_bdevio 00:45:37.789 ************************************ 00:45:38.048 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:45:38.048 00:45:38.048 real 3m37.413s 00:45:38.048 user 9m12.685s 00:45:38.048 sys 1m28.787s 00:45:38.048 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:45:38.048 05:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:38.049 ************************************ 00:45:38.049 END TEST nvmf_target_core_interrupt_mode 00:45:38.049 ************************************ 00:45:38.049 05:56:57 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:38.049 05:56:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:45:38.049 05:56:57 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:45:38.049 05:56:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:38.049 ************************************ 00:45:38.049 START TEST nvmf_interrupt 00:45:38.049 ************************************ 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:45:38.049 * Looking for test storage... 00:45:38.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.049 --rc genhtml_branch_coverage=1 00:45:38.049 --rc genhtml_function_coverage=1 00:45:38.049 --rc genhtml_legend=1 00:45:38.049 --rc geninfo_all_blocks=1 00:45:38.049 --rc geninfo_unexecuted_blocks=1 00:45:38.049 00:45:38.049 ' 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.049 --rc genhtml_branch_coverage=1 00:45:38.049 --rc genhtml_function_coverage=1 00:45:38.049 --rc genhtml_legend=1 00:45:38.049 --rc geninfo_all_blocks=1 00:45:38.049 --rc geninfo_unexecuted_blocks=1 00:45:38.049 00:45:38.049 ' 00:45:38.049 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.049 --rc genhtml_branch_coverage=1 00:45:38.049 --rc genhtml_function_coverage=1 00:45:38.049 --rc genhtml_legend=1 00:45:38.049 --rc geninfo_all_blocks=1 00:45:38.049 --rc geninfo_unexecuted_blocks=1 00:45:38.049 00:45:38.049 ' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:38.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:38.309 --rc genhtml_branch_coverage=1 00:45:38.309 --rc genhtml_function_coverage=1 00:45:38.309 --rc genhtml_legend=1 00:45:38.309 --rc geninfo_all_blocks=1 00:45:38.309 --rc geninfo_unexecuted_blocks=1 00:45:38.309 00:45:38.309 ' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
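The scripts/common.sh trace above is the dotted-version comparison that decides whether the installed lcov predates 2.x, and therefore which coverage option names to use. A condensed sketch of the same field-by-field logic, assuming purely numeric fields (the real helper also validates each component through its decimal check):

    version_lt() {
        # Return 0 (true) when $1 < $2, comparing dot/dash-separated fields.
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov older than 2.x: use legacy --rc names'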
00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:45:38.309 05:56:57 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:38.309 05:56:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:38.309 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:45:38.309 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:45:38.309 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:45:38.310 Cannot find device "nvmf_init_br" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:45:38.310 Cannot find device "nvmf_init_br2" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:45:38.310 Cannot find device "nvmf_tgt_br" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:45:38.310 Cannot find device "nvmf_tgt_br2" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:45:38.310 Cannot find device "nvmf_init_br" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:45:38.310 Cannot find device "nvmf_init_br2" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:45:38.310 Cannot find device "nvmf_tgt_br" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:45:38.310 Cannot find device "nvmf_tgt_br2" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:45:38.310 Cannot find device "nvmf_br" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:45:38.310 Cannot find device "nvmf_init_if" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:45:38.310 Cannot find device "nvmf_init_if2" 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:38.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:38.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:45:38.310 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
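nvmf_veth_init, traced through here, builds a symmetric topology: each *_if end of a veth pair carries an address (10.0.0.1 and 10.0.0.2 in the root namespace, 10.0.0.3 and 10.0.0.4 inside nvmf_tgt_ns_spdk), while the *_br peer ends are enslaved to the nvmf_br bridge in the lines just below. Condensed to a single initiator/target pair with the same names:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br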
00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:45:38.569 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:38.569 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:45:38.569 00:45:38.569 --- 10.0.0.3 ping statistics --- 00:45:38.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:38.569 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:45:38.569 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:45:38.569 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:45:38.569 00:45:38.569 --- 10.0.0.4 ping statistics --- 00:45:38.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:38.569 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:38.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:38.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:45:38.569 00:45:38.569 --- 10.0.0.1 ping statistics --- 00:45:38.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:38.569 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:45:38.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:38.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:45:38.569 00:45:38.569 --- 10.0.0.2 ping statistics --- 00:45:38.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:38.569 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:45:38.569 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=107466 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 107466 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 107466 ']' 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:38.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:38.829 05:56:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:38.829 [2024-11-20 05:56:58.564407] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:38.829 [2024-11-20 05:56:58.565964] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:45:38.829 [2024-11-20 05:56:58.566041] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:38.829 [2024-11-20 05:56:58.728280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:39.089 [2024-11-20 05:56:58.817028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:45:39.089 [2024-11-20 05:56:58.817112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:39.089 [2024-11-20 05:56:58.817129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:39.089 [2024-11-20 05:56:58.817143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:39.089 [2024-11-20 05:56:58.817154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:39.089 [2024-11-20 05:56:58.818675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:39.089 [2024-11-20 05:56:58.818685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:39.089 [2024-11-20 05:56:58.993649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:39.089 [2024-11-20 05:56:58.994820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:39.089 [2024-11-20 05:56:58.994852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:45:40.127 5000+0 records in 00:45:40.127 5000+0 records out 00:45:40.127 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0453991 s, 226 MB/s 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:40.127 AIO0 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:40.127 [2024-11-20 05:56:59.763931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:45:40.127 [2024-11-20 05:56:59.800309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107466 0 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107466 0 idle 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:40.127 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107466 root 20 0 64.2g 45312 32768 S 6.7 0.4 0:00.42 reactor_0' 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107466 root 20 0 64.2g 45312 32768 S 6.7 0.4 0:00.42 reactor_0 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107466 1 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107466 1 idle 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256 00:45:40.128 05:56:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107480 root 20 0 64.2g 45312 32768 S 0.0 0.4 0:00.00 reactor_1' 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107480 root 20 0 64.2g 45312 32768 S 0.0 0.4 0:00.00 reactor_1 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=107548 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:45:40.388 
05:57:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107466 0
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107466 0 busy
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:45:40.388 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107466 root 20 0 64.2g 45312 32768 S 0.0 0.4 0:00.42 reactor_0'
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107466 root 20 0 64.2g 45312 32768 S 0.0 0.4 0:00.42 reactor_0
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:45:40.647 05:57:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1
00:45:41.585 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- ))
00:45:41.585 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:45:41.585 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256
00:45:41.585 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107466 root 20 0 64.2g 46592 33152 R 99.9 0.4 0:01.84 reactor_0'
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107466 root 20 0 64.2g 46592 33152 R 99.9 0.4 0:01.84 reactor_0
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107466 1
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107466 1 busy
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466
00:45:41.844 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107480 root 20 0 64.2g 46592 33152 R 73.3 0.4 0:00.82 reactor_1'
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107480 root 20 0 64.2g 46592 33152 R 73.3 0.4 0:00.82 reactor_1
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:45:41.845 05:57:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 107548
00:45:51.829 Initializing NVMe Controllers
00:45:51.829 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:45:51.829 Controller IO queue size 256, less than required.
00:45:51.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:45:51.829 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:45:51.829 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:45:51.829 Initialization complete. Launching workers.
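The busy checks above are the heart of the nvmf_interrupt test: with the target in interrupt mode, a reactor thread should sit near 0% CPU while idle and climb toward 100% once spdk_nvme_perf (pid 107548, started above with -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC) begins issuing I/O, which is why the same probe read 0.0 on its first sample and 99.9 one second later. The probe that interrupt/common.sh exercises reduces to sampling top once per attempt and comparing the thread's %CPU (column 9) against a threshold. A condensed, self-contained sketch of that loop follows; it is reconstructed from the trace, not copied from the SPDK source, and the function name and the 65/30 defaults are illustrative. The latency summary after it is the output of the perf run.

    # Sketch of the reactor busy/idle probe seen in the interrupt/common.sh
    # trace above (illustrative reconstruction, not the actual SPDK helper).
    reactor_check() {
        local pid=$1 idx=$2 state=$3                 # state: "busy" or "idle"
        local busy_threshold=${BUSY_THRESHOLD:-65}   # interrupt.sh overrides this to 30
        local idle_threshold=30
        local j top_reactor cpu_rate
        for ((j = 10; j != 0; j--)); do
            # One batched top iteration, threads visible (-H), this pid only;
            # %CPU for the reactor thread is the ninth column.
            top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
            cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}                  # truncate the fraction, as the trace does
            if [[ $state = busy ]]; then
                (( cpu_rate >= busy_threshold )) && return 0
            else
                (( cpu_rate <= idle_threshold )) && return 0
            fi
            sleep 1                                  # let the reactor settle, then retry
        done
        return 1
    }

Invoked as, e.g., reactor_check 107466 0 busy once the perf workload is active, this is the shape of the check the log walks through for both reactors.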
00:45:51.829 ========================================================
00:45:51.829                                                                          Latency(us)
00:45:51.829 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:45:51.829 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:    6927.40      27.06   37001.62    6375.42   99602.52
00:45:51.829 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:    6124.50      23.92   41855.13    8087.21  109208.96
00:45:51.829 ========================================================
00:45:51.829 Total                                                                    :   13051.90      50.98   39279.09    6375.42  109208.96
00:45:51.829
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107466 0
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107466 0 idle
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107466 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:14.24 reactor_0'
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107466 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:14.24 reactor_0
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107466 1
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107466 1 idle
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466
00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- #
local idx=1 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107480 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:06.82 reactor_1' 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107480 root 20 0 64.2g 46592 33152 S 0.0 0.4 0:06.82 reactor_1 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:45:51.829 05:57:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107466 0 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107466 0 idle 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256 00:45:53.210 05:57:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:45:53.210 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107466 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:14.29 reactor_0' 00:45:53.210 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107466 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:14.29 reactor_0 00:45:53.210 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:53.210 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107466 1 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107466 1 idle 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107466 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:45:53.470 05:57:13 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107466 -w 256 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107480 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.82 reactor_1' 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107480 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.82 reactor_1 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:45:53.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:45:53.471 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:53.730 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:53.730 rmmod nvme_tcp 00:45:53.730 rmmod nvme_fabrics 00:45:53.730 rmmod nvme_keyring 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 107466 ']' 00:45:53.989 05:57:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 107466 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 107466 ']' 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 107466 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 107466 00:45:53.989 killing process with pid 107466 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 107466' 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 107466 00:45:53.989 05:57:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 107466 00:45:54.249 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:45:54.249 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:54.249 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:54.249 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:45:54.249 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:54.249 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:45:54.249 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:54.509 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:54.769 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:45:54.769 05:57:14 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:45:54.769 05:57:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:45:54.769 05:57:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:45:54.769 05:57:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0
00:45:54.769
00:45:54.769 real 0m16.704s
00:45:54.769 user 0m30.065s
00:45:54.769 sys 0m6.855s
00:45:54.769 05:57:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable
00:45:54.769 ************************************
00:45:54.769 END TEST nvmf_interrupt
00:45:54.769 05:57:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:45:54.769 ************************************
00:45:54.769 ************************************
00:45:54.769 END TEST nvmf_tcp
00:45:54.769 ************************************
00:45:54.769
00:45:54.769 real 20m19.941s
00:45:54.769 user 51m43.022s
00:45:54.769 sys 5m57.056s
00:45:54.769 05:57:14 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:45:54.769 05:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:45:54.769 05:57:14 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]]
00:45:54.769 05:57:14 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:45:54.769 05:57:14 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:45:54.769 05:57:14 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:45:54.769 05:57:14 -- common/autotest_common.sh@10 -- # set +x
00:45:54.769 ************************************
00:45:54.769 START TEST spdkcli_nvmf_tcp
00:45:54.769 ************************************
00:45:54.769 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
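The asterisk banners and the real/user/sys summaries above are produced by autotest's run_test wrapper, which brackets every sub-test with START/END markers and times it; the same pattern opens again here for spdkcli_nvmf_tcp. Below is a rough reconstruction of that wrapper, inferred from this log's output rather than copied from autotest_common.sh, so treat the body as illustrative.

    # Illustrative run_test-style wrapper, inferred from the START/END TEST
    # banners and `time` summaries in this log (not the real autotest code).
    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"        # emits the real/user/sys lines seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return "$rc"
    }

    # The invocation recorded above:
    run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp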
00:45:55.030 * Looking for test storage...
00:45:55.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:45:55.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:45:55.030 --rc genhtml_branch_coverage=1
00:45:55.030 --rc genhtml_function_coverage=1
00:45:55.030 --rc genhtml_legend=1
00:45:55.030 --rc geninfo_all_blocks=1
00:45:55.030 --rc geninfo_unexecuted_blocks=1
00:45:55.030
00:45:55.030 '
00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:45:55.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:45:55.030 --rc genhtml_branch_coverage=1
00:45:55.030 --rc genhtml_function_coverage=1 00:45:55.030 --rc genhtml_legend=1 00:45:55.030 --rc geninfo_all_blocks=1 00:45:55.030 --rc geninfo_unexecuted_blocks=1 00:45:55.030 00:45:55.030 ' 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:55.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.030 --rc genhtml_branch_coverage=1 00:45:55.030 --rc genhtml_function_coverage=1 00:45:55.030 --rc genhtml_legend=1 00:45:55.030 --rc geninfo_all_blocks=1 00:45:55.030 --rc geninfo_unexecuted_blocks=1 00:45:55.030 00:45:55.030 ' 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:55.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:55.030 --rc genhtml_branch_coverage=1 00:45:55.030 --rc genhtml_function_coverage=1 00:45:55.030 --rc genhtml_legend=1 00:45:55.030 --rc geninfo_all_blocks=1 00:45:55.030 --rc geninfo_unexecuted_blocks=1 00:45:55.030 00:45:55.030 ' 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.030 05:57:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:55.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=107884 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 107884 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 107884 ']' 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:55.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:55.031 05:57:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:55.031 [2024-11-20 05:57:14.940777] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:45:55.031 [2024-11-20 05:57:14.941101] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107884 ] 00:45:55.291 [2024-11-20 05:57:15.099348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:45:55.550 [2024-11-20 05:57:15.222634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:55.550 [2024-11-20 05:57:15.222653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:56.118 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:56.118 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:45:56.118 05:57:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:45:56.118 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:56.118 05:57:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:56.377 05:57:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:45:56.377 05:57:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:45:56.377 05:57:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:45:56.377 05:57:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:56.377 05:57:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:56.377 05:57:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:45:56.377 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:45:56.377 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:45:56.377 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:45:56.377 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:45:56.377 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:45:56.377 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:45:56.377 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:56.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:45:56.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:45:56.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:56.377 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:56.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:45:56.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:56.377 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:56.377 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:45:56.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:45:56.378 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:45:56.378 ' 00:45:58.914 [2024-11-20 05:57:18.785843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:00.305 [2024-11-20 05:57:20.101305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:46:02.889 [2024-11-20 05:57:22.547520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:46:04.795 [2024-11-20 05:57:24.668964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:46:06.701 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:46:06.701 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:46:06.701 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:46:06.701 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:46:06.701 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:46:06.701 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:46:06.701 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:46:06.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:46:06.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:46:06.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:46:06.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:46:06.702 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:46:06.702 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:46:06.702 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:46:06.702 05:57:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:07.270 05:57:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:46:07.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:46:07.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:46:07.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:46:07.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:46:07.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:46:07.270 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:46:07.270 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:46:07.270 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:46:07.270 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:46:07.270 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:46:07.270 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:46:07.270 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:46:07.270 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:46:07.270 ' 00:46:13.839 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:46:13.839 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:46:13.839 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:46:13.839 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:46:13.839 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:46:13.839 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:46:13.839 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:46:13.839 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:46:13.839 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:46:13.839 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:46:13.839 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:46:13.839 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:46:13.839 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:46:13.839 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 107884 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 107884 ']' 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 107884 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 107884 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:46:13.839 killing process with pid 107884 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 107884' 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 107884 00:46:13.839 05:57:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 107884 00:46:13.839 Process with pid 107884 is not found 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 107884 ']' 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 107884 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 107884 ']' 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 107884 00:46:13.839 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (107884) - No such process 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 107884 is not found' 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:46:13.839 00:46:13.839 real 0m18.560s 00:46:13.839 user 0m40.047s 00:46:13.839 sys 0m1.271s 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:46:13.839 05:57:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:13.839 ************************************ 00:46:13.839 END TEST spdkcli_nvmf_tcp 00:46:13.839 ************************************ 00:46:13.839 05:57:33 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:46:13.839 05:57:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:46:13.839 05:57:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:13.839 05:57:33 -- common/autotest_common.sh@10 -- # set +x 00:46:13.839 ************************************ 00:46:13.839 START TEST nvmf_identify_passthru 00:46:13.839 ************************************ 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:46:13.839 * Looking for test storage... 00:46:13.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:13.839 05:57:33 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:46:13.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:13.839 --rc genhtml_branch_coverage=1 00:46:13.839 --rc genhtml_function_coverage=1 00:46:13.839 --rc genhtml_legend=1 00:46:13.839 --rc geninfo_all_blocks=1 00:46:13.839 --rc geninfo_unexecuted_blocks=1 00:46:13.839 00:46:13.839 ' 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:46:13.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:13.839 --rc genhtml_branch_coverage=1 00:46:13.839 --rc genhtml_function_coverage=1 00:46:13.839 --rc genhtml_legend=1 00:46:13.839 --rc geninfo_all_blocks=1 00:46:13.839 --rc geninfo_unexecuted_blocks=1 00:46:13.839 00:46:13.839 ' 00:46:13.839 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:46:13.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:13.840 --rc genhtml_branch_coverage=1 00:46:13.840 --rc genhtml_function_coverage=1 00:46:13.840 --rc genhtml_legend=1 00:46:13.840 --rc geninfo_all_blocks=1 00:46:13.840 --rc geninfo_unexecuted_blocks=1 00:46:13.840 00:46:13.840 ' 00:46:13.840 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:46:13.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:13.840 --rc genhtml_branch_coverage=1 00:46:13.840 --rc genhtml_function_coverage=1 00:46:13.840 --rc genhtml_legend=1 00:46:13.840 --rc geninfo_all_blocks=1 00:46:13.840 --rc geninfo_unexecuted_blocks=1 00:46:13.840 00:46:13.840 ' 00:46:13.840 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:13.840 
05:57:33 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:13.840 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:13.840 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:46:13.840 05:57:33 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:13.840 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:13.840 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:13.840 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:13.840 Cannot find device "nvmf_init_br" 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:13.840 Cannot find device "nvmf_init_br2" 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:13.840 Cannot find device "nvmf_tgt_br" 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:46:13.840 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:13.840 Cannot find device "nvmf_tgt_br2" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:13.841 Cannot find device "nvmf_init_br" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:13.841 Cannot find device "nvmf_init_br2" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:13.841 Cannot find device "nvmf_tgt_br" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:13.841 Cannot find device "nvmf_tgt_br2" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:13.841 Cannot find device "nvmf_br" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:46:13.841 Cannot find device "nvmf_init_if" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:13.841 Cannot find device "nvmf_init_if2" 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:13.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:13.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:13.841 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:46:14.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:14.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.137 ms 00:46:14.101 00:46:14.101 --- 10.0.0.3 ping statistics --- 00:46:14.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:14.101 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:46:14.101 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:46:14.101 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:46:14.101 00:46:14.101 --- 10.0.0.4 ping statistics --- 00:46:14.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:14.101 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:14.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:14.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:46:14.101 00:46:14.101 --- 10.0.0.1 ping statistics --- 00:46:14.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:14.101 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:46:14.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:14.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:46:14.101 00:46:14.101 --- 10.0.0.2 ping statistics --- 00:46:14.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:14.101 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:14.101 05:57:33 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:14.101 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:14.101 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:46:14.101 05:57:33 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:46:14.101 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:46:14.101 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:46:14.101 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:46:14.101 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:46:14.101 05:57:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:46:14.360 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
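
The bdf discovery and serial-number read just above reduce to one short pipeline. A minimal sketch, reusing only commands that appear in this run (the /home/vagrant/spdk_repo paths are this VM's layout, not a fixed install location):

    #!/usr/bin/env bash
    set -e
    rootdir=/home/vagrant/spdk_repo/spdk

    # Enumerate NVMe PCI addresses via SPDK's config generator; take the first.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}    # 0000:00:10.0 in this run

    # Identify the controller directly over PCIe and keep field 3 of the
    # 'Serial Number:' line, mirroring the grep | awk pair in the log.
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
    echo "$bdf serial=$serial"    # 12340 for the QEMU drive here

Repeating the same grep against the fabric-side identify (trtype:tcp, the 10.0.0.3:4420 listener) is what the test does later, and the two values feed the '[' 12340 '!=' 12340 ']' comparison further down.
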
00:46:14.361 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:46:14.361 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:46:14.361 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:46:14.620 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:46:14.620 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:14.620 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:14.620 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=108419 00:46:14.620 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:14.620 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:46:14.620 05:57:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 108419 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 108419 ']' 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:14.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:46:14.620 05:57:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:14.620 [2024-11-20 05:57:34.494825] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:46:14.620 [2024-11-20 05:57:34.494918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:14.879 [2024-11-20 05:57:34.659909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:14.879 [2024-11-20 05:57:34.764863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:14.879 [2024-11-20 05:57:34.765183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:14.879 [2024-11-20 05:57:34.765399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:14.879 [2024-11-20 05:57:34.765570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
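
The EAL banner above comes from the target launched a few entries earlier: nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace with -i 0 -e 0xFFFF -m 0xF --wait-for-rpc, and the harness blocks in waitforlisten until the RPC socket answers. A rough stand-in for that launch-and-wait step (the plain poll loop substitutes for the waitforlisten helper; socket path as printed in the log):

    # Start the target in the test namespace created earlier; remember its pid.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Wait for the default RPC UNIX socket to appear before issuing rpc_cmd calls.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Because of --wait-for-rpc the app halts before subsystem init, which is why nvmf_set_config --passthru-identify-ctrlr has to land first and framework_start_init only after it, exactly the rpc_cmd order shown below.
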
00:46:14.879 [2024-11-20 05:57:34.765621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:14.879 [2024-11-20 05:57:34.767516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:14.879 [2024-11-20 05:57:34.767594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:14.879 [2024-11-20 05:57:34.767772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:14.879 [2024-11-20 05:57:34.767780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:46:15.818 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.818 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:15.818 [2024-11-20 05:57:35.692793] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.818 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:15.818 [2024-11-20 05:57:35.702930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:15.818 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:15.818 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:16.077 Nvme0n1 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:16.077 [2024-11-20 05:57:35.852687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:16.077 [ 00:46:16.077 { 00:46:16.077 "allow_any_host": true, 00:46:16.077 "hosts": [], 00:46:16.077 "listen_addresses": [], 00:46:16.077 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:46:16.077 "subtype": "Discovery" 00:46:16.077 }, 00:46:16.077 { 00:46:16.077 "allow_any_host": true, 00:46:16.077 "hosts": [], 00:46:16.077 "listen_addresses": [ 00:46:16.077 { 00:46:16.077 "adrfam": "IPv4", 00:46:16.077 "traddr": "10.0.0.3", 00:46:16.077 "trsvcid": "4420", 00:46:16.077 "trtype": "TCP" 00:46:16.077 } 00:46:16.077 ], 00:46:16.077 "max_cntlid": 65519, 00:46:16.077 "max_namespaces": 1, 00:46:16.077 "min_cntlid": 1, 00:46:16.077 "model_number": "SPDK bdev Controller", 00:46:16.077 "namespaces": [ 00:46:16.077 { 00:46:16.077 "bdev_name": "Nvme0n1", 00:46:16.077 "name": "Nvme0n1", 00:46:16.077 "nguid": "6036D04CD1B049748F026BB55B0B1A30", 00:46:16.077 "nsid": 1, 00:46:16.077 "uuid": "6036d04c-d1b0-4974-8f02-6bb55b0b1a30" 00:46:16.077 } 00:46:16.077 ], 00:46:16.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:16.077 "serial_number": "SPDK00000000000001", 00:46:16.077 "subtype": "NVMe" 00:46:16.077 } 00:46:16.077 ] 00:46:16.077 05:57:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:46:16.077 05:57:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:16.337 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:46:16.337 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:46:16.337 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:16.337 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:46:16.597 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:46:16.597 05:57:36 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:46:16.597 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:46:16.597 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:16.597 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:46:16.597 05:57:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:16.597 rmmod nvme_tcp 00:46:16.597 rmmod nvme_fabrics 00:46:16.597 rmmod nvme_keyring 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 108419 ']' 00:46:16.597 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 108419 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 108419 ']' 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 108419 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:46:16.597 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 108419 00:46:16.855 killing process with pid 108419 00:46:16.855 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:46:16.855 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:46:16.855 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 108419' 00:46:16.855 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 108419 00:46:16.855 05:57:36 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 108419 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:46:17.114 05:57:36 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:17.114 05:57:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:17.373 05:57:37 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:46:17.373 05:57:37 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:17.373 05:57:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:17.373 05:57:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:17.373 05:57:37 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:46:17.373 00:46:17.373 real 0m3.871s 00:46:17.373 user 0m8.674s 00:46:17.373 sys 0m1.179s 00:46:17.373 05:57:37 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:17.373 05:57:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:46:17.373 ************************************ 00:46:17.373 END TEST nvmf_identify_passthru 00:46:17.373 ************************************ 00:46:17.373 05:57:37 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:46:17.373 05:57:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:17.373 05:57:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:17.373 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:46:17.373 ************************************ 00:46:17.373 START TEST nvmf_dif 00:46:17.373 ************************************ 00:46:17.373 05:57:37 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:46:17.373 * Looking for test storage... 
00:46:17.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:17.373 05:57:37 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:46:17.373 05:57:37 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:46:17.373 05:57:37 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:46:17.692 05:57:37 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:46:17.692 05:57:37 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:17.692 05:57:37 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:46:17.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:17.692 --rc genhtml_branch_coverage=1 00:46:17.692 --rc genhtml_function_coverage=1 00:46:17.692 --rc genhtml_legend=1 00:46:17.692 --rc geninfo_all_blocks=1 00:46:17.692 --rc geninfo_unexecuted_blocks=1 00:46:17.692 00:46:17.692 ' 00:46:17.692 05:57:37 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:46:17.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:17.692 --rc genhtml_branch_coverage=1 00:46:17.692 --rc genhtml_function_coverage=1 00:46:17.692 --rc genhtml_legend=1 00:46:17.692 --rc geninfo_all_blocks=1 00:46:17.692 --rc geninfo_unexecuted_blocks=1 00:46:17.692 00:46:17.692 ' 00:46:17.692 05:57:37 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:46:17.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:17.692 --rc genhtml_branch_coverage=1 00:46:17.692 --rc genhtml_function_coverage=1 00:46:17.692 --rc genhtml_legend=1 00:46:17.692 --rc geninfo_all_blocks=1 00:46:17.692 --rc geninfo_unexecuted_blocks=1 00:46:17.692 00:46:17.692 ' 00:46:17.692 05:57:37 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:46:17.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:17.692 --rc genhtml_branch_coverage=1 00:46:17.692 --rc genhtml_function_coverage=1 00:46:17.692 --rc genhtml_legend=1 00:46:17.692 --rc geninfo_all_blocks=1 00:46:17.692 --rc geninfo_unexecuted_blocks=1 00:46:17.692 00:46:17.692 ' 00:46:17.692 05:57:37 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:17.692 05:57:37 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:17.692 05:57:37 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:17.692 05:57:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:17.692 05:57:37 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:17.692 05:57:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:17.692 05:57:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:46:17.693 05:57:37 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:17.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:17.693 05:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:46:17.693 05:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:46:17.693 05:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:46:17.693 05:57:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:46:17.693 05:57:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:17.693 05:57:37 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:17.693 05:57:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:46:17.693 05:57:37 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:46:17.693 Cannot find device "nvmf_init_br" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@162 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:46:17.693 Cannot find device "nvmf_init_br2" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@163 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:46:17.693 Cannot find device "nvmf_tgt_br" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@164 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:46:17.693 Cannot find device "nvmf_tgt_br2" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@165 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:46:17.693 Cannot find device "nvmf_init_br" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@166 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:46:17.693 Cannot find device "nvmf_init_br2" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@167 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:46:17.693 Cannot find device "nvmf_tgt_br" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@168 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:46:17.693 Cannot find device "nvmf_tgt_br2" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@169 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:46:17.693 Cannot find device "nvmf_br" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@170 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:46:17.693 Cannot find device "nvmf_init_if" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@171 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:46:17.693 Cannot find device "nvmf_init_if2" 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@172 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:17.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@173 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:17.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@174 -- # true 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:17.693 05:57:37 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:46:17.951 05:57:37 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:46:17.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:46:17.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms
00:46:17.951 
00:46:17.951 --- 10.0.0.3 ping statistics ---
00:46:17.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:17.951 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:46:17.951 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:46:17.951 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms
00:46:17.951 
00:46:17.951 --- 10.0.0.4 ping statistics ---
00:46:17.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:17.951 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:46:17.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:46:17.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:46:17.951 
00:46:17.951 --- 10.0.0.1 ping statistics ---
00:46:17.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:17.951 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:46:17.951 05:57:37 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:46:17.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:46:17.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms
00:46:17.951 
00:46:17.951 --- 10.0.0.2 ping statistics ---
00:46:17.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:46:17.952 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms
00:46:17.952 05:57:37 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:46:17.952 05:57:37 nvmf_dif -- nvmf/common.sh@461 -- # return 0
00:46:17.952 05:57:37 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:46:17.952 05:57:37 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:46:18.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:46:18.515 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:46:18.515 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:46:18.515 05:57:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:46:18.515 05:57:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=108821
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:46:18.515 05:57:38 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 108821
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 108821 ']'
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100
00:46:18.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable
00:46:18.515 05:57:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:46:18.515 [2024-11-20 05:57:38.417795] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:46:18.515 [2024-11-20 05:57:38.417893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:46:18.772 [2024-11-20 05:57:38.577481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:46:18.772 [2024-11-20 05:57:38.672251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
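The nvmf_veth_init trace above builds the whole test network from scratch: two veth pairs for the initiator side, two for the target side, with the target ends moved into the nvmf_tgt_ns_spdk namespace and all four peer ends joined by the nvmf_br bridge. Condensed into a standalone sketch (interface names and addresses are exactly the ones in the trace; run as root, error handling omitted):

  # Namespace for the nvmf target, plus four veth pairs.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Target-facing ends live inside the namespace where nvmf_tgt will run.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator addresses (root namespace) and target addresses (inside the netns).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up and bridge the four peer ends together.
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Open the NVMe/TCP port on an initiator-facing interface; the ipts wrapper
  # tags each rule with an 'SPDK_NVMF:' comment, presumably so teardown can
  # later strip exactly the rules this run added (one rule per interface).
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

The four pings above verify both directions across the bridge (10.0.0.3/10.0.0.4 reached from the root namespace, 10.0.0.1/10.0.0.2 from inside the netns) before nvmf_tgt is launched under ip netns exec. The earlier "Cannot find device" and "Cannot open network namespace" messages are just the preceding cleanup pass running on a machine that had none of these devices yet; each is followed by "# true" in the trace because those failures are expected.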
00:46:18.772 [2024-11-20 05:57:38.672324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:18.772 [2024-11-20 05:57:38.672340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:18.772 [2024-11-20 05:57:38.672354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:18.772 [2024-11-20 05:57:38.672366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:18.772 [2024-11-20 05:57:38.672863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:46:19.708 05:57:39 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:19.708 05:57:39 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:19.708 05:57:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:46:19.708 05:57:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:19.708 [2024-11-20 05:57:39.531406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:19.708 05:57:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:19.708 05:57:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:19.708 ************************************ 00:46:19.708 START TEST fio_dif_1_default 00:46:19.708 ************************************ 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:19.708 bdev_null0 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:19.708 05:57:39 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:19.708 [2024-11-20 05:57:39.583620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:46:19.708 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:19.708 { 00:46:19.708 "params": { 00:46:19.708 "name": "Nvme$subsystem", 00:46:19.709 "trtype": "$TEST_TRANSPORT", 00:46:19.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:19.709 "adrfam": "ipv4", 00:46:19.709 "trsvcid": "$NVMF_PORT", 00:46:19.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:19.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:19.709 "hdgst": ${hdgst:-false}, 00:46:19.709 "ddgst": ${ddgst:-false} 00:46:19.709 }, 00:46:19.709 "method": "bdev_nvme_attach_controller" 00:46:19.709 } 00:46:19.709 EOF 00:46:19.709 )") 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:19.709 05:57:39 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:46:19.709 05:57:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:19.709 "params": { 00:46:19.709 "name": "Nvme0", 00:46:19.709 "trtype": "tcp", 00:46:19.709 "traddr": "10.0.0.3", 00:46:19.709 "adrfam": "ipv4", 00:46:19.709 "trsvcid": "4420", 00:46:19.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:19.709 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:19.709 "hdgst": false, 00:46:19.709 "ddgst": false 00:46:19.709 }, 00:46:19.709 "method": "bdev_nvme_attach_controller" 00:46:19.709 }' 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:19.967 05:57:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:19.967 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:19.967 fio-3.35 00:46:19.967 Starting 1 thread 00:46:32.194 00:46:32.194 filename0: (groupid=0, jobs=1): err= 0: pid=108907: Wed Nov 20 05:57:50 2024 00:46:32.194 read: IOPS=995, BW=3980KiB/s (4076kB/s)(38.9MiB/10001msec) 00:46:32.194 slat (nsec): min=5421, max=41778, avg=6291.95, stdev=1898.55 00:46:32.194 clat (usec): min=313, max=42402, avg=4001.93, stdev=11538.53 00:46:32.194 lat (usec): min=319, max=42408, avg=4008.22, stdev=11538.63 00:46:32.194 clat percentiles (usec): 00:46:32.194 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 359], 00:46:32.194 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 
383], 00:46:32.194 | 70.00th=[ 392], 80.00th=[ 420], 90.00th=[ 603], 95.00th=[40633], 00:46:32.194 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:46:32.194 | 99.99th=[42206] 00:46:32.194 bw ( KiB/s): min= 1184, max= 6816, per=96.75%, avg=3851.63, stdev=1789.55, samples=19 00:46:32.194 iops : min= 296, max= 1704, avg=962.89, stdev=447.41, samples=19 00:46:32.194 lat (usec) : 500=85.41%, 750=4.98%, 1000=0.44% 00:46:32.194 lat (msec) : 2=0.24%, 50=8.92% 00:46:32.194 cpu : usr=82.79%, sys=16.51%, ctx=13, majf=0, minf=9 00:46:32.194 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:32.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:32.194 issued rwts: total=9952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:32.194 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:32.194 00:46:32.194 Run status group 0 (all jobs): 00:46:32.194 READ: bw=3980KiB/s (4076kB/s), 3980KiB/s-3980KiB/s (4076kB/s-4076kB/s), io=38.9MiB (40.8MB), run=10001-10001msec 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.194 00:46:32.194 real 0m11.158s 00:46:32.194 user 0m9.083s 00:46:32.194 sys 0m1.965s 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 ************************************ 00:46:32.194 END TEST fio_dif_1_default 00:46:32.194 ************************************ 00:46:32.194 05:57:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:46:32.194 05:57:50 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:32.194 05:57:50 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 ************************************ 00:46:32.194 START TEST fio_dif_1_multi_subsystems 00:46:32.194 ************************************ 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 bdev_null0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.194 [2024-11-20 05:57:50.796030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:46:32.194 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.195 bdev_null1 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:32.195 { 00:46:32.195 "params": { 00:46:32.195 "name": "Nvme$subsystem", 00:46:32.195 "trtype": "$TEST_TRANSPORT", 00:46:32.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:32.195 "adrfam": "ipv4", 00:46:32.195 "trsvcid": "$NVMF_PORT", 00:46:32.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:32.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:32.195 "hdgst": ${hdgst:-false}, 00:46:32.195 "ddgst": ${ddgst:-false} 00:46:32.195 }, 00:46:32.195 "method": "bdev_nvme_attach_controller" 00:46:32.195 } 00:46:32.195 EOF 00:46:32.195 )") 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:32.195 { 00:46:32.195 "params": { 00:46:32.195 "name": "Nvme$subsystem", 00:46:32.195 "trtype": "$TEST_TRANSPORT", 00:46:32.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:32.195 "adrfam": "ipv4", 00:46:32.195 "trsvcid": "$NVMF_PORT", 00:46:32.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:32.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:32.195 "hdgst": ${hdgst:-false}, 00:46:32.195 "ddgst": ${ddgst:-false} 00:46:32.195 }, 00:46:32.195 "method": "bdev_nvme_attach_controller" 00:46:32.195 } 00:46:32.195 EOF 00:46:32.195 )") 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
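What gen_nvmf_target_json assembles here is one bdev_nvme_attach_controller block per subsystem; jq validates the result, and printf joins the blocks with commas (the finished two-controller JSON is printed just below). fio never sees a config file on disk: the fio_bdev/fio_plugin helpers traced above preload the SPDK bdev engine and hand both generated configs over as file descriptors. A simplified sketch of that invocation, assuming bash process substitution is what produces the /dev/fd/62 and /dev/fd/61 paths seen in the trace:

  # The spdk_bdev ioengine comes from the preloaded plugin; the JSON bdev
  # config arrives on /dev/fd/62 and the generated fio job file on /dev/fd/61.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0 1) \
      <(gen_fio_conf)

Passing both configs through file descriptors keeps the per-test setup ephemeral, so nothing has to be cleaned off disk between the dozens of fio runs in this suite.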
00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:32.195 "params": { 00:46:32.195 "name": "Nvme0", 00:46:32.195 "trtype": "tcp", 00:46:32.195 "traddr": "10.0.0.3", 00:46:32.195 "adrfam": "ipv4", 00:46:32.195 "trsvcid": "4420", 00:46:32.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:32.195 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:32.195 "hdgst": false, 00:46:32.195 "ddgst": false 00:46:32.195 }, 00:46:32.195 "method": "bdev_nvme_attach_controller" 00:46:32.195 },{ 00:46:32.195 "params": { 00:46:32.195 "name": "Nvme1", 00:46:32.195 "trtype": "tcp", 00:46:32.195 "traddr": "10.0.0.3", 00:46:32.195 "adrfam": "ipv4", 00:46:32.195 "trsvcid": "4420", 00:46:32.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:32.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:32.195 "hdgst": false, 00:46:32.195 "ddgst": false 00:46:32.195 }, 00:46:32.195 "method": "bdev_nvme_attach_controller" 00:46:32.195 }' 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:32.195 05:57:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:32.195 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:32.195 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:46:32.195 fio-3.35 00:46:32.195 Starting 2 threads 00:46:42.177 00:46:42.177 filename0: (groupid=0, jobs=1): err= 0: pid=109067: Wed Nov 20 05:58:01 2024 00:46:42.177 read: IOPS=219, BW=876KiB/s (897kB/s)(8768KiB/10006msec) 00:46:42.177 slat (nsec): min=5500, max=36367, avg=7487.29, stdev=3539.09 00:46:42.177 clat (usec): min=316, max=41818, avg=18236.02, stdev=20080.88 00:46:42.177 lat (usec): min=321, max=41830, avg=18243.51, stdev=20080.81 00:46:42.177 clat percentiles (usec): 00:46:42.177 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 371], 00:46:42.177 | 30.00th=[ 383], 40.00th=[ 445], 50.00th=[ 570], 60.00th=[40633], 00:46:42.177 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:46:42.177 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:46:42.177 | 99.99th=[41681] 00:46:42.177 bw ( KiB/s): min= 480, max= 2304, per=39.49%, avg=877.47, stdev=410.40, samples=19 00:46:42.177 iops : 
min= 120, max= 576, avg=219.37, stdev=102.60, samples=19 00:46:42.177 lat (usec) : 500=42.11%, 750=12.09%, 1000=0.18% 00:46:42.177 lat (msec) : 2=1.46%, 4=0.18%, 50=43.98% 00:46:42.177 cpu : usr=92.49%, sys=7.11%, ctx=11, majf=0, minf=0 00:46:42.177 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:42.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:42.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:42.177 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:42.177 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:42.177 filename1: (groupid=0, jobs=1): err= 0: pid=109068: Wed Nov 20 05:58:01 2024 00:46:42.177 read: IOPS=336, BW=1346KiB/s (1378kB/s)(13.2MiB/10021msec) 00:46:42.177 slat (nsec): min=5583, max=73164, avg=8663.63, stdev=4624.99 00:46:42.177 clat (usec): min=321, max=42460, avg=11860.86, stdev=18225.62 00:46:42.177 lat (usec): min=327, max=42471, avg=11869.52, stdev=18225.19 00:46:42.177 clat percentiles (usec): 00:46:42.177 | 1.00th=[ 338], 5.00th=[ 363], 10.00th=[ 375], 20.00th=[ 383], 00:46:42.177 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 420], 60.00th=[ 457], 00:46:42.177 | 70.00th=[ 685], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:46:42.177 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:46:42.177 | 99.99th=[42206] 00:46:42.177 bw ( KiB/s): min= 512, max= 2688, per=60.65%, avg=1347.20, stdev=683.80, samples=20 00:46:42.177 iops : min= 128, max= 672, avg=336.80, stdev=170.95, samples=20 00:46:42.177 lat (usec) : 500=64.41%, 750=6.35%, 1000=0.42% 00:46:42.177 lat (msec) : 2=0.39%, 4=0.21%, 50=28.23% 00:46:42.177 cpu : usr=92.70%, sys=6.54%, ctx=50, majf=0, minf=0 00:46:42.177 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:42.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:42.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:42.177 issued rwts: total=3372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:42.177 latency : target=0, window=0, percentile=100.00%, depth=4 00:46:42.177 00:46:42.177 Run status group 0 (all jobs): 00:46:42.177 READ: bw=2221KiB/s (2274kB/s), 876KiB/s-1346KiB/s (897kB/s-1378kB/s), io=21.7MiB (22.8MB), run=10006-10021msec 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 
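The destroy_subsystems calls here are the mirror image of the create_subsystems calls at the top of the test, and every rpc_cmd in the trace is ultimately a scripts/rpc.py call against the target's RPC socket. Pulled out of the harness, the create/teardown pair for subsystem 0 looks roughly like this (a sketch; -s /var/tmp/spdk.sock is the default socket that waitforlisten polled earlier):

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  # Setup: a null bdev with 16-byte metadata and DIF, exported over NVMe/TCP.
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # ... run fio against the listener ...
  # Teardown, in reverse order of creation:
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC bdev_null_delete bdev_null0

The multi-subsystem variant just repeats the setup block with cnode1/bdev_null1, which is why the trace shows the same RPCs twice with the sub_id swapped.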
00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:42.436 ************************************ 00:46:42.436 END TEST fio_dif_1_multi_subsystems 00:46:42.436 ************************************ 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.436 00:46:42.436 real 0m11.390s 00:46:42.436 user 0m19.472s 00:46:42.436 sys 0m1.792s 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:46:42.436 05:58:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:46:42.436 05:58:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:46:42.436 05:58:02 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:46:42.436 05:58:02 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:46:42.436 05:58:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:42.436 ************************************ 00:46:42.436 START TEST fio_dif_rand_params 00:46:42.436 ************************************ 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:46:42.436 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:42.437 bdev_null0 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:42.437 [2024-11-20 05:58:02.269235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:42.437 { 00:46:42.437 "params": { 00:46:42.437 "name": "Nvme$subsystem", 00:46:42.437 "trtype": "$TEST_TRANSPORT", 00:46:42.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:42.437 "adrfam": "ipv4", 00:46:42.437 "trsvcid": "$NVMF_PORT", 00:46:42.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:42.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:42.437 "hdgst": ${hdgst:-false}, 00:46:42.437 "ddgst": ${ddgst:-false} 00:46:42.437 }, 00:46:42.437 "method": "bdev_nvme_attach_controller" 00:46:42.437 } 00:46:42.437 EOF 00:46:42.437 )") 00:46:42.437 05:58:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
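The rand_params test repeats the same create/fio/destroy cycle while sweeping the DIF and fio parameters. NULL_DIF selects the T10 protection-information type the null bdev emulates in its 16-byte metadata (3 for this pass; a later pass in the trace switches to NULL_DIF=2 with bs=4k, numjobs=8, iodepth=16 and files=2), and because the transport was created with --dif-insert-or-strip, the TCP target inserts or strips that PI on the wire. Reading the knobs off the traced commands (units per the bdev_null_create signature are an assumption: size in MiB, then logical block size):

  # Transport-level DIF handling, created once back at create_transport:
  rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # Per-test null bdev: 64 MiB, 512-byte blocks, 16-byte metadata,
  # protection-information type taken from NULL_DIF.
  NULL_DIF=3
  rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type "$NULL_DIF"
  # fio geometry for this pass, as set in target/dif.sh above:
  bs=128k; numjobs=3; iodepth=3; runtime=5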
00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:42.437 "params": { 00:46:42.437 "name": "Nvme0", 00:46:42.437 "trtype": "tcp", 00:46:42.437 "traddr": "10.0.0.3", 00:46:42.437 "adrfam": "ipv4", 00:46:42.437 "trsvcid": "4420", 00:46:42.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:42.437 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:42.437 "hdgst": false, 00:46:42.437 "ddgst": false 00:46:42.437 }, 00:46:42.437 "method": "bdev_nvme_attach_controller" 00:46:42.437 }' 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:42.437 05:58:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:42.697 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:46:42.697 ... 
00:46:42.697 fio-3.35 00:46:42.697 Starting 3 threads 00:46:49.265 00:46:49.265 filename0: (groupid=0, jobs=1): err= 0: pid=109224: Wed Nov 20 05:58:08 2024 00:46:49.265 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(196MiB/5005msec) 00:46:49.265 slat (nsec): min=4821, max=37380, avg=10616.34, stdev=3815.80 00:46:49.265 clat (usec): min=3362, max=50504, avg=9555.55, stdev=8293.55 00:46:49.265 lat (usec): min=3372, max=50510, avg=9566.17, stdev=8293.66 00:46:49.265 clat percentiles (usec): 00:46:49.265 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 6456], 00:46:49.265 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8586], 00:46:49.265 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9765], 00:46:49.265 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:46:49.265 | 99.99th=[50594] 00:46:49.265 bw ( KiB/s): min=26112, max=48128, per=35.27%, avg=40089.60, stdev=8055.94, samples=10 00:46:49.265 iops : min= 204, max= 376, avg=313.20, stdev=62.94, samples=10 00:46:49.265 lat (msec) : 4=0.96%, 10=94.58%, 20=0.25%, 50=4.02%, 100=0.19% 00:46:49.265 cpu : usr=91.31%, sys=7.51%, ctx=10, majf=0, minf=9 00:46:49.265 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:49.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:49.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:49.265 issued rwts: total=1569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:49.265 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:49.265 filename0: (groupid=0, jobs=1): err= 0: pid=109225: Wed Nov 20 05:58:08 2024 00:46:49.265 read: IOPS=324, BW=40.5MiB/s (42.5MB/s)(203MiB/5004msec) 00:46:49.265 slat (nsec): min=3230, max=29107, avg=11167.53, stdev=4253.45 00:46:49.265 clat (usec): min=2965, max=46920, avg=9241.31, stdev=3579.86 00:46:49.265 lat (usec): min=2972, max=46927, avg=9252.48, stdev=3580.95 00:46:49.265 clat percentiles (usec): 00:46:49.265 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 6783], 00:46:49.265 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 9634], 60.00th=[11469], 00:46:49.265 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:46:49.265 | 99.00th=[13566], 99.50th=[13960], 99.90th=[45876], 99.95th=[46924], 00:46:49.265 | 99.99th=[46924] 00:46:49.265 bw ( KiB/s): min=33024, max=55552, per=36.21%, avg=41159.11, stdev=8525.97, samples=9 00:46:49.265 iops : min= 258, max= 434, avg=321.56, stdev=66.61, samples=9 00:46:49.265 lat (msec) : 4=13.13%, 10=37.98%, 20=48.71%, 50=0.18% 00:46:49.265 cpu : usr=91.27%, sys=7.54%, ctx=5, majf=0, minf=0 00:46:49.265 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:49.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:49.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:49.265 issued rwts: total=1622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:49.265 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:49.265 filename0: (groupid=0, jobs=1): err= 0: pid=109226: Wed Nov 20 05:58:08 2024 00:46:49.265 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5003msec) 00:46:49.265 slat (nsec): min=3383, max=55060, avg=13714.27, stdev=6990.75 00:46:49.265 clat (usec): min=5093, max=54144, avg=11955.80, stdev=10161.74 00:46:49.265 lat (usec): min=5100, max=54151, avg=11969.51, stdev=10161.68 00:46:49.265 clat percentiles (usec): 00:46:49.265 | 1.00th=[ 5473], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 
8160], 00:46:49.265 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:46:49.265 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11469], 95.00th=[47449], 00:46:49.265 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[54264], 00:46:49.265 | 99.99th=[54264] 00:46:49.265 bw ( KiB/s): min=26368, max=37888, per=27.18%, avg=30890.67, stdev=3976.25, samples=9 00:46:49.265 iops : min= 206, max= 296, avg=241.33, stdev=31.06, samples=9 00:46:49.265 lat (msec) : 10=55.55%, 20=37.75%, 50=3.83%, 100=2.87% 00:46:49.265 cpu : usr=91.02%, sys=7.38%, ctx=65, majf=0, minf=0 00:46:49.265 IO depths : 1=4.1%, 2=95.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:49.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:49.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:49.265 issued rwts: total=1253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:49.265 latency : target=0, window=0, percentile=100.00%, depth=3 00:46:49.265 00:46:49.265 Run status group 0 (all jobs): 00:46:49.265 READ: bw=111MiB/s (116MB/s), 31.3MiB/s-40.5MiB/s (32.8MB/s-42.5MB/s), io=556MiB (582MB), run=5003-5005msec 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 bdev_null0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 [2024-11-20 05:58:08.524650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 bdev_null1 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:46:49.265 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.266 bdev_null2 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 
-- # config+=("$(cat <<-EOF 00:46:49.266 { 00:46:49.266 "params": { 00:46:49.266 "name": "Nvme$subsystem", 00:46:49.266 "trtype": "$TEST_TRANSPORT", 00:46:49.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:49.266 "adrfam": "ipv4", 00:46:49.266 "trsvcid": "$NVMF_PORT", 00:46:49.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:49.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:49.266 "hdgst": ${hdgst:-false}, 00:46:49.266 "ddgst": ${ddgst:-false} 00:46:49.266 }, 00:46:49.266 "method": "bdev_nvme_attach_controller" 00:46:49.266 } 00:46:49.266 EOF 00:46:49.266 )") 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:49.266 { 00:46:49.266 "params": { 00:46:49.266 "name": "Nvme$subsystem", 00:46:49.266 "trtype": "$TEST_TRANSPORT", 00:46:49.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:49.266 "adrfam": "ipv4", 00:46:49.266 "trsvcid": "$NVMF_PORT", 00:46:49.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:49.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:49.266 "hdgst": ${hdgst:-false}, 00:46:49.266 "ddgst": ${ddgst:-false} 00:46:49.266 }, 00:46:49.266 "method": "bdev_nvme_attach_controller" 00:46:49.266 } 00:46:49.266 EOF 00:46:49.266 )") 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:49.266 05:58:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:46:49.266 { 00:46:49.266 "params": { 00:46:49.266 "name": "Nvme$subsystem", 00:46:49.266 "trtype": "$TEST_TRANSPORT", 00:46:49.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:49.266 "adrfam": "ipv4", 00:46:49.266 "trsvcid": "$NVMF_PORT", 00:46:49.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:49.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:49.266 "hdgst": ${hdgst:-false}, 00:46:49.266 "ddgst": ${ddgst:-false} 00:46:49.266 }, 00:46:49.266 "method": "bdev_nvme_attach_controller" 00:46:49.266 } 00:46:49.266 EOF 00:46:49.266 )") 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:46:49.266 "params": { 00:46:49.266 "name": "Nvme0", 00:46:49.266 "trtype": "tcp", 00:46:49.266 "traddr": "10.0.0.3", 00:46:49.266 "adrfam": "ipv4", 00:46:49.266 "trsvcid": "4420", 00:46:49.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:49.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:49.266 "hdgst": false, 00:46:49.266 "ddgst": false 00:46:49.266 }, 00:46:49.266 "method": "bdev_nvme_attach_controller" 00:46:49.266 },{ 00:46:49.266 "params": { 00:46:49.266 "name": "Nvme1", 00:46:49.266 "trtype": "tcp", 00:46:49.266 "traddr": "10.0.0.3", 00:46:49.266 "adrfam": "ipv4", 00:46:49.266 "trsvcid": "4420", 00:46:49.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:49.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:49.266 "hdgst": false, 00:46:49.266 "ddgst": false 00:46:49.266 }, 00:46:49.266 "method": "bdev_nvme_attach_controller" 00:46:49.266 },{ 00:46:49.266 "params": { 00:46:49.266 "name": "Nvme2", 00:46:49.266 "trtype": "tcp", 00:46:49.266 "traddr": "10.0.0.3", 00:46:49.266 "adrfam": "ipv4", 00:46:49.266 "trsvcid": "4420", 00:46:49.266 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:49.266 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:49.266 "hdgst": false, 00:46:49.266 "ddgst": false 00:46:49.266 }, 00:46:49.266 "method": "bdev_nvme_attach_controller" 00:46:49.266 }' 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:46:49.266 
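The resolved JSON printed above is only half of the fio bdev-plugin input; gen_fio_conf streams the companion job file to fio on /dev/fd/61. A minimal sketch of what that job file plausibly contains for this pass, written as a heredoc in the trace's own style: the rw/bs/iodepth/numjobs values mirror the parameters set earlier (bs=4k, numjobs=8, iodepth=16, files=2, i.e. three filenames), while the Nvme0n1/Nvme1n1/Nvme2n1 filenames are an assumption based on the conventional NvmeXn1 bdev naming, not taken from the trace:

cat <<EOF > job.fio
[global]
thread=1              # spdk_bdev engine jobs run as threads
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=16
numjobs=8             # 8 jobs x 3 files = the 24 threads fio starts below

[filename0]
filename=Nvme0n1      # assumed bdev name from bdev_nvme_attach_controller name=Nvme0

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF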
05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:46:49.266 05:58:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:46:49.266 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:49.266 ... 00:46:49.266 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:49.266 ... 00:46:49.266 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:46:49.266 ... 00:46:49.266 fio-3.35 00:46:49.266 Starting 24 threads 00:47:01.476 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109321: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=273, BW=1092KiB/s (1118kB/s)(10.7MiB/10042msec) 00:47:01.476 slat (usec): min=5, max=4016, avg=13.97, stdev=77.04 00:47:01.476 clat (msec): min=14, max=143, avg=58.48, stdev=20.23 00:47:01.476 lat (msec): min=14, max=144, avg=58.49, stdev=20.23 00:47:01.476 clat percentiles (msec): 00:47:01.476 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 42], 00:47:01.476 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 57], 60.00th=[ 61], 00:47:01.476 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 94], 00:47:01.476 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:47:01.476 | 99.99th=[ 144] 00:47:01.476 bw ( KiB/s): min= 784, max= 1584, per=4.17%, avg=1090.40, stdev=188.21, samples=20 00:47:01.476 iops : min= 196, max= 396, avg=272.60, stdev=47.05, samples=20 00:47:01.476 lat (msec) : 20=1.57%, 50=35.19%, 100=60.58%, 250=2.66% 00:47:01.476 cpu : usr=37.99%, sys=1.11%, ctx=1119, majf=0, minf=9 00:47:01.476 IO depths : 1=1.6%, 2=3.6%, 4=11.6%, 8=71.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:47:01.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 issued rwts: total=2742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109322: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=251, BW=1005KiB/s (1029kB/s)(9.82MiB/10004msec) 00:47:01.476 slat (usec): min=4, max=8015, avg=16.02, stdev=159.97 00:47:01.476 clat (msec): min=3, max=165, avg=63.55, stdev=21.03 00:47:01.476 lat (msec): min=3, max=165, avg=63.57, stdev=21.03 00:47:01.476 clat percentiles (msec): 00:47:01.476 | 1.00th=[ 5], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 52], 00:47:01.476 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 67], 00:47:01.476 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 95], 00:47:01.476 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 165], 99.95th=[ 165], 00:47:01.476 | 99.99th=[ 165] 00:47:01.476 bw ( KiB/s): min= 768, max= 1152, per=3.69%, avg=964.00, stdev=110.04, samples=19 00:47:01.476 iops : min= 192, max= 288, avg=241.00, stdev=27.51, samples=19 00:47:01.476 lat (msec) : 4=0.64%, 10=3.18%, 50=14.84%, 100=78.76%, 250=2.59% 00:47:01.476 cpu : usr=41.34%, sys=1.32%, ctx=1216, majf=0, minf=9 00:47:01.476 IO depths : 1=3.4%, 2=7.7%, 4=19.1%, 8=60.5%, 16=9.3%, 32=0.0%, >=64=0.0% 00:47:01.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:47:01.476 issued rwts: total=2514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109323: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=263, BW=1054KiB/s (1079kB/s)(10.3MiB/10031msec) 00:47:01.476 slat (usec): min=4, max=4022, avg=15.08, stdev=83.03 00:47:01.476 clat (msec): min=22, max=135, avg=60.61, stdev=18.95 00:47:01.476 lat (msec): min=22, max=135, avg=60.63, stdev=18.94 00:47:01.476 clat percentiles (msec): 00:47:01.476 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 46], 00:47:01.476 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:47:01.476 | 70.00th=[ 69], 80.00th=[ 78], 90.00th=[ 87], 95.00th=[ 95], 00:47:01.476 | 99.00th=[ 108], 99.50th=[ 116], 99.90th=[ 136], 99.95th=[ 136], 00:47:01.476 | 99.99th=[ 136] 00:47:01.476 bw ( KiB/s): min= 720, max= 1296, per=4.01%, avg=1049.95, stdev=150.23, samples=20 00:47:01.476 iops : min= 180, max= 324, avg=262.45, stdev=37.53, samples=20 00:47:01.476 lat (msec) : 50=34.77%, 100=61.75%, 250=3.48% 00:47:01.476 cpu : usr=32.33%, sys=0.86%, ctx=951, majf=0, minf=9 00:47:01.476 IO depths : 1=1.2%, 2=2.8%, 4=10.9%, 8=72.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:47:01.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 issued rwts: total=2643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109324: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=285, BW=1142KiB/s (1169kB/s)(11.2MiB/10029msec) 00:47:01.476 slat (usec): min=4, max=8032, avg=19.33, stdev=198.38 00:47:01.476 clat (msec): min=15, max=166, avg=55.90, stdev=20.75 00:47:01.476 lat (msec): min=15, max=166, avg=55.92, stdev=20.76 00:47:01.476 clat percentiles (msec): 00:47:01.476 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 40], 00:47:01.476 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:47:01.476 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 90], 00:47:01.476 | 99.00th=[ 127], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 167], 00:47:01.476 | 99.99th=[ 167] 00:47:01.476 bw ( KiB/s): min= 768, max= 1523, per=4.35%, avg=1137.45, stdev=254.38, samples=20 00:47:01.476 iops : min= 192, max= 380, avg=284.30, stdev=63.56, samples=20 00:47:01.476 lat (msec) : 20=1.68%, 50=45.44%, 100=49.67%, 250=3.21% 00:47:01.476 cpu : usr=41.06%, sys=1.38%, ctx=1104, majf=0, minf=9 00:47:01.476 IO depths : 1=1.7%, 2=3.8%, 4=12.2%, 8=70.8%, 16=11.5%, 32=0.0%, >=64=0.0% 00:47:01.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 issued rwts: total=2863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109325: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=252, BW=1010KiB/s (1034kB/s)(9.86MiB/10002msec) 00:47:01.476 slat (usec): min=4, max=4036, avg=21.91, stdev=178.89 00:47:01.476 clat (msec): min=4, max=158, avg=63.20, stdev=21.87 00:47:01.476 lat (msec): min=4, max=158, avg=63.23, stdev=21.87 00:47:01.476 clat percentiles (msec): 00:47:01.476 | 1.00th=[ 6], 5.00th=[ 35], 10.00th=[ 43], 20.00th=[ 51], 00:47:01.476 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 59], 60.00th=[ 
64], 00:47:01.476 | 70.00th=[ 71], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 108], 00:47:01.476 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 159], 99.95th=[ 159], 00:47:01.476 | 99.99th=[ 159] 00:47:01.476 bw ( KiB/s): min= 768, max= 1256, per=3.70%, avg=967.26, stdev=148.82, samples=19 00:47:01.476 iops : min= 192, max= 314, avg=241.74, stdev=37.18, samples=19 00:47:01.476 lat (msec) : 10=3.17%, 50=17.23%, 100=73.94%, 250=5.66% 00:47:01.476 cpu : usr=39.91%, sys=1.14%, ctx=1174, majf=0, minf=9 00:47:01.476 IO depths : 1=2.4%, 2=6.1%, 4=17.0%, 8=63.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:47:01.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 complete : 0=0.0%, 4=92.1%, 8=2.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 issued rwts: total=2525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109326: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=277, BW=1108KiB/s (1135kB/s)(10.8MiB/10007msec) 00:47:01.476 slat (usec): min=4, max=4035, avg=16.77, stdev=132.29 00:47:01.476 clat (msec): min=11, max=167, avg=57.62, stdev=18.29 00:47:01.476 lat (msec): min=11, max=167, avg=57.64, stdev=18.29 00:47:01.476 clat percentiles (msec): 00:47:01.476 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 44], 00:47:01.476 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 00:47:01.476 | 70.00th=[ 63], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 89], 00:47:01.476 | 99.00th=[ 115], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:47:01.476 | 99.99th=[ 167] 00:47:01.476 bw ( KiB/s): min= 768, max= 1408, per=4.18%, avg=1093.32, stdev=179.98, samples=19 00:47:01.476 iops : min= 192, max= 352, avg=273.32, stdev=44.99, samples=19 00:47:01.476 lat (msec) : 20=0.36%, 50=38.80%, 100=59.11%, 250=1.73% 00:47:01.476 cpu : usr=39.21%, sys=1.21%, ctx=1082, majf=0, minf=9 00:47:01.476 IO depths : 1=1.4%, 2=3.5%, 4=11.6%, 8=71.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:47:01.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109327: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=248, BW=993KiB/s (1016kB/s)(9928KiB/10002msec) 00:47:01.476 slat (usec): min=4, max=6890, avg=16.20, stdev=149.42 00:47:01.476 clat (msec): min=12, max=167, avg=64.33, stdev=18.69 00:47:01.476 lat (msec): min=12, max=167, avg=64.35, stdev=18.69 00:47:01.476 clat percentiles (msec): 00:47:01.476 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 52], 00:47:01.476 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 66], 00:47:01.476 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 93], 00:47:01.476 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 167], 99.95th=[ 167], 00:47:01.476 | 99.99th=[ 167] 00:47:01.476 bw ( KiB/s): min= 768, max= 1152, per=3.77%, avg=984.42, stdev=114.34, samples=19 00:47:01.476 iops : min= 192, max= 288, avg=246.11, stdev=28.58, samples=19 00:47:01.476 lat (msec) : 20=0.64%, 50=17.53%, 100=78.36%, 250=3.46% 00:47:01.476 cpu : usr=40.73%, sys=1.25%, ctx=1394, majf=0, minf=9 00:47:01.476 IO depths : 1=2.7%, 2=6.3%, 4=17.3%, 8=63.6%, 16=10.1%, 32=0.0%, >=64=0.0% 00:47:01.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:47:01.476 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.476 issued rwts: total=2482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.476 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.476 filename0: (groupid=0, jobs=1): err= 0: pid=109328: Wed Nov 20 05:58:19 2024 00:47:01.476 read: IOPS=331, BW=1326KiB/s (1358kB/s)(13.0MiB/10036msec) 00:47:01.476 slat (usec): min=4, max=3019, avg=12.70, stdev=66.28 00:47:01.476 clat (msec): min=2, max=155, avg=48.14, stdev=20.84 00:47:01.477 lat (msec): min=2, max=155, avg=48.16, stdev=20.84 00:47:01.477 clat percentiles (msec): 00:47:01.477 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 27], 20.00th=[ 36], 00:47:01.477 | 30.00th=[ 37], 40.00th=[ 42], 50.00th=[ 47], 60.00th=[ 51], 00:47:01.477 | 70.00th=[ 58], 80.00th=[ 62], 90.00th=[ 73], 95.00th=[ 84], 00:47:01.477 | 99.00th=[ 109], 99.50th=[ 122], 99.90th=[ 157], 99.95th=[ 157], 00:47:01.477 | 99.99th=[ 157] 00:47:01.477 bw ( KiB/s): min= 784, max= 3180, per=5.07%, avg=1325.80, stdev=470.63, samples=20 00:47:01.477 iops : min= 196, max= 795, avg=331.45, stdev=117.66, samples=20 00:47:01.477 lat (msec) : 4=2.89%, 10=1.92%, 20=2.89%, 50=52.15%, 100=38.92% 00:47:01.477 lat (msec) : 250=1.23% 00:47:01.477 cpu : usr=43.81%, sys=1.58%, ctx=1851, majf=0, minf=0 00:47:01.477 IO depths : 1=0.8%, 2=1.8%, 4=8.1%, 8=76.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:47:01.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 issued rwts: total=3327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.477 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.477 filename1: (groupid=0, jobs=1): err= 0: pid=109329: Wed Nov 20 05:58:19 2024 00:47:01.477 read: IOPS=264, BW=1056KiB/s (1082kB/s)(10.3MiB/10032msec) 00:47:01.477 slat (usec): min=5, max=8016, avg=18.98, stdev=207.08 00:47:01.477 clat (msec): min=7, max=145, avg=60.36, stdev=18.81 00:47:01.477 lat (msec): min=7, max=145, avg=60.38, stdev=18.82 00:47:01.477 clat percentiles (msec): 00:47:01.477 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:47:01.477 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 62], 00:47:01.477 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 94], 00:47:01.477 | 99.00th=[ 118], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 146], 00:47:01.477 | 99.99th=[ 146] 00:47:01.477 bw ( KiB/s): min= 768, max= 1408, per=4.03%, avg=1053.20, stdev=173.67, samples=20 00:47:01.477 iops : min= 192, max= 352, avg=263.30, stdev=43.42, samples=20 00:47:01.477 lat (msec) : 10=0.60%, 20=0.60%, 50=30.62%, 100=66.10%, 250=2.08% 00:47:01.477 cpu : usr=35.14%, sys=1.14%, ctx=1081, majf=0, minf=9 00:47:01.477 IO depths : 1=1.7%, 2=3.8%, 4=13.7%, 8=69.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:47:01.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 issued rwts: total=2649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.477 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.477 filename1: (groupid=0, jobs=1): err= 0: pid=109330: Wed Nov 20 05:58:19 2024 00:47:01.477 read: IOPS=245, BW=982KiB/s (1005kB/s)(9828KiB/10009msec) 00:47:01.477 slat (usec): min=4, max=8027, avg=17.70, stdev=177.70 00:47:01.477 clat (msec): min=27, max=145, avg=65.06, stdev=18.79 00:47:01.477 lat (msec): min=27, max=145, avg=65.07, stdev=18.80 00:47:01.477 clat percentiles 
(msec): 00:47:01.477 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 49], 00:47:01.477 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 70], 00:47:01.477 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 96], 00:47:01.477 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 146], 00:47:01.477 | 99.99th=[ 146] 00:47:01.477 bw ( KiB/s): min= 768, max= 1256, per=3.73%, avg=974.00, stdev=114.71, samples=19 00:47:01.477 iops : min= 192, max= 314, avg=243.47, stdev=28.67, samples=19 00:47:01.477 lat (msec) : 50=24.18%, 100=72.00%, 250=3.83% 00:47:01.477 cpu : usr=34.72%, sys=1.19%, ctx=1137, majf=0, minf=9 00:47:01.477 IO depths : 1=2.1%, 2=4.7%, 4=14.0%, 8=68.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:47:01.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.477 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.477 filename1: (groupid=0, jobs=1): err= 0: pid=109331: Wed Nov 20 05:58:19 2024 00:47:01.477 read: IOPS=325, BW=1301KiB/s (1332kB/s)(12.7MiB/10028msec) 00:47:01.477 slat (usec): min=4, max=8027, avg=14.96, stdev=158.92 00:47:01.477 clat (usec): min=3386, max=98534, avg=49070.04, stdev=16983.04 00:47:01.477 lat (usec): min=3397, max=98548, avg=49085.00, stdev=16982.17 00:47:01.477 clat percentiles (usec): 00:47:01.477 | 1.00th=[ 3490], 5.00th=[21103], 10.00th=[32637], 20.00th=[36439], 00:47:01.477 | 30.00th=[39584], 40.00th=[45351], 50.00th=[47973], 60.00th=[50594], 00:47:01.477 | 70.00th=[57934], 80.00th=[60031], 90.00th=[71828], 95.00th=[78119], 00:47:01.477 | 99.00th=[94897], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042], 00:47:01.477 | 99.99th=[98042] 00:47:01.477 bw ( KiB/s): min= 1040, max= 2560, per=4.97%, avg=1298.40, stdev=326.01, samples=20 00:47:01.477 iops : min= 260, max= 640, avg=324.60, stdev=81.50, samples=20 00:47:01.477 lat (msec) : 4=1.96%, 10=0.98%, 20=1.96%, 50=53.53%, 100=41.57% 00:47:01.477 cpu : usr=42.84%, sys=1.05%, ctx=1012, majf=0, minf=9 00:47:01.477 IO depths : 1=0.6%, 2=1.5%, 4=8.7%, 8=76.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:47:01.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 complete : 0=0.0%, 4=89.3%, 8=6.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 issued rwts: total=3262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.477 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.477 filename1: (groupid=0, jobs=1): err= 0: pid=109332: Wed Nov 20 05:58:19 2024 00:47:01.477 read: IOPS=265, BW=1060KiB/s (1086kB/s)(10.4MiB/10004msec) 00:47:01.477 slat (usec): min=4, max=8026, avg=17.15, stdev=156.05 00:47:01.477 clat (msec): min=3, max=172, avg=60.25, stdev=22.96 00:47:01.477 lat (msec): min=3, max=172, avg=60.26, stdev=22.96 00:47:01.477 clat percentiles (msec): 00:47:01.477 | 1.00th=[ 6], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 44], 00:47:01.477 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 62], 00:47:01.477 | 70.00th=[ 69], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 108], 00:47:01.477 | 99.00th=[ 120], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 174], 00:47:01.477 | 99.99th=[ 174] 00:47:01.477 bw ( KiB/s): min= 728, max= 1456, per=3.91%, avg=1022.00, stdev=189.61, samples=19 00:47:01.477 iops : min= 182, max= 364, avg=255.47, stdev=47.41, samples=19 00:47:01.477 lat (msec) : 4=0.60%, 10=2.41%, 50=30.32%, 100=60.52%, 250=6.15% 00:47:01.477 cpu : usr=38.31%, 
sys=0.97%, ctx=1104, majf=0, minf=9 00:47:01.477 IO depths : 1=1.9%, 2=4.6%, 4=14.1%, 8=68.0%, 16=11.3%, 32=0.0%, >=64=0.0% 00:47:01.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 issued rwts: total=2652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.477 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.477 filename1: (groupid=0, jobs=1): err= 0: pid=109333: Wed Nov 20 05:58:19 2024 00:47:01.477 read: IOPS=263, BW=1056KiB/s (1081kB/s)(10.3MiB/10002msec) 00:47:01.477 slat (usec): min=2, max=8019, avg=21.42, stdev=220.57 00:47:01.477 clat (usec): min=983, max=147974, avg=60473.49, stdev=22228.26 00:47:01.477 lat (usec): min=991, max=147996, avg=60494.91, stdev=22225.08 00:47:01.477 clat percentiles (msec): 00:47:01.477 | 1.00th=[ 4], 5.00th=[ 30], 10.00th=[ 37], 20.00th=[ 45], 00:47:01.477 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 63], 00:47:01.477 | 70.00th=[ 69], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 96], 00:47:01.477 | 99.00th=[ 134], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:47:01.477 | 99.99th=[ 148] 00:47:01.477 bw ( KiB/s): min= 640, max= 1408, per=3.87%, avg=1010.53, stdev=203.33, samples=19 00:47:01.477 iops : min= 160, max= 352, avg=252.63, stdev=50.83, samples=19 00:47:01.477 lat (usec) : 1000=0.08% 00:47:01.477 lat (msec) : 4=1.48%, 10=2.08%, 20=0.19%, 50=23.98%, 100=68.22% 00:47:01.477 lat (msec) : 250=3.98% 00:47:01.477 cpu : usr=39.46%, sys=1.21%, ctx=1108, majf=0, minf=9 00:47:01.477 IO depths : 1=1.2%, 2=2.8%, 4=10.3%, 8=72.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:47:01.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 complete : 0=0.0%, 4=90.6%, 8=5.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 issued rwts: total=2640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.477 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.477 filename1: (groupid=0, jobs=1): err= 0: pid=109334: Wed Nov 20 05:58:19 2024 00:47:01.477 read: IOPS=269, BW=1078KiB/s (1104kB/s)(10.5MiB/10018msec) 00:47:01.477 slat (usec): min=2, max=4022, avg=18.22, stdev=147.66 00:47:01.477 clat (msec): min=15, max=136, avg=59.22, stdev=19.47 00:47:01.477 lat (msec): min=15, max=136, avg=59.23, stdev=19.47 00:47:01.477 clat percentiles (msec): 00:47:01.477 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 46], 00:47:01.477 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:47:01.477 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 94], 00:47:01.477 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:47:01.477 | 99.99th=[ 138] 00:47:01.477 bw ( KiB/s): min= 768, max= 1428, per=4.05%, avg=1059.11, stdev=192.96, samples=19 00:47:01.477 iops : min= 192, max= 357, avg=264.74, stdev=48.24, samples=19 00:47:01.477 lat (msec) : 20=0.59%, 50=34.19%, 100=61.96%, 250=3.26% 00:47:01.477 cpu : usr=38.54%, sys=1.36%, ctx=1182, majf=0, minf=9 00:47:01.477 IO depths : 1=2.3%, 2=5.3%, 4=15.0%, 8=66.7%, 16=10.7%, 32=0.0%, >=64=0.0% 00:47:01.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 complete : 0=0.0%, 4=91.2%, 8=3.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.477 issued rwts: total=2700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.477 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.477 filename1: (groupid=0, jobs=1): err= 0: pid=109335: Wed Nov 20 05:58:19 2024 00:47:01.477 read: IOPS=260, 
BW=1044KiB/s (1069kB/s)(10.2MiB/10002msec) 00:47:01.477 slat (usec): min=4, max=4044, avg=14.00, stdev=79.45 00:47:01.477 clat (msec): min=6, max=143, avg=61.21, stdev=20.22 00:47:01.477 lat (msec): min=6, max=143, avg=61.22, stdev=20.22 00:47:01.477 clat percentiles (msec): 00:47:01.477 | 1.00th=[ 15], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 48], 00:47:01.477 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:47:01.477 | 70.00th=[ 68], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 96], 00:47:01.477 | 99.00th=[ 124], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:47:01.477 | 99.99th=[ 144] 00:47:01.477 bw ( KiB/s): min= 688, max= 1280, per=3.92%, avg=1024.84, stdev=176.52, samples=19 00:47:01.478 iops : min= 172, max= 320, avg=256.21, stdev=44.13, samples=19 00:47:01.478 lat (msec) : 10=0.96%, 20=0.27%, 50=26.78%, 100=67.43%, 250=4.56% 00:47:01.478 cpu : usr=40.67%, sys=1.33%, ctx=1487, majf=0, minf=9 00:47:01.478 IO depths : 1=2.0%, 2=4.7%, 4=12.8%, 8=68.9%, 16=11.5%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=91.1%, 8=4.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 issued rwts: total=2610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename1: (groupid=0, jobs=1): err= 0: pid=109336: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=260, BW=1043KiB/s (1069kB/s)(10.2MiB/10001msec) 00:47:01.478 slat (usec): min=4, max=4540, avg=18.25, stdev=142.38 00:47:01.478 clat (usec): min=1206, max=166334, avg=61212.33, stdev=26448.34 00:47:01.478 lat (usec): min=1213, max=166346, avg=61230.58, stdev=26447.96 00:47:01.478 clat percentiles (usec): 00:47:01.478 | 1.00th=[ 1237], 5.00th=[ 3326], 10.00th=[ 34866], 20.00th=[ 47973], 00:47:01.478 | 30.00th=[ 51643], 40.00th=[ 57410], 50.00th=[ 60031], 60.00th=[ 66847], 00:47:01.478 | 70.00th=[ 71828], 80.00th=[ 78119], 90.00th=[ 94897], 95.00th=[104334], 00:47:01.478 | 99.00th=[131597], 99.50th=[139461], 99.90th=[166724], 99.95th=[166724], 00:47:01.478 | 99.99th=[166724] 00:47:01.478 bw ( KiB/s): min= 528, max= 1152, per=3.62%, avg=946.95, stdev=163.89, samples=19 00:47:01.478 iops : min= 132, max= 288, avg=236.74, stdev=40.97, samples=19 00:47:01.478 lat (msec) : 2=4.91%, 4=0.61%, 10=3.07%, 50=17.82%, 100=68.00% 00:47:01.478 lat (msec) : 250=5.60% 00:47:01.478 cpu : usr=34.39%, sys=1.12%, ctx=1137, majf=0, minf=9 00:47:01.478 IO depths : 1=1.9%, 2=4.6%, 4=15.4%, 8=66.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=91.6%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 issued rwts: total=2609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename2: (groupid=0, jobs=1): err= 0: pid=109337: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=313, BW=1254KiB/s (1284kB/s)(12.3MiB/10013msec) 00:47:01.478 slat (usec): min=5, max=4019, avg=17.85, stdev=154.46 00:47:01.478 clat (msec): min=15, max=132, avg=50.92, stdev=20.38 00:47:01.478 lat (msec): min=15, max=132, avg=50.94, stdev=20.38 00:47:01.478 clat percentiles (msec): 00:47:01.478 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 36], 00:47:01.478 | 30.00th=[ 39], 40.00th=[ 41], 50.00th=[ 47], 60.00th=[ 51], 00:47:01.478 | 70.00th=[ 57], 80.00th=[ 64], 90.00th=[ 81], 95.00th=[ 93], 00:47:01.478 | 99.00th=[ 124], 99.50th=[ 125], 
99.90th=[ 132], 99.95th=[ 132], 00:47:01.478 | 99.99th=[ 132] 00:47:01.478 bw ( KiB/s): min= 768, max= 2024, per=4.79%, avg=1251.60, stdev=316.87, samples=20 00:47:01.478 iops : min= 192, max= 506, avg=312.90, stdev=79.22, samples=20 00:47:01.478 lat (msec) : 20=2.20%, 50=57.98%, 100=36.22%, 250=3.60% 00:47:01.478 cpu : usr=40.65%, sys=1.08%, ctx=1159, majf=0, minf=9 00:47:01.478 IO depths : 1=0.5%, 2=1.2%, 4=7.1%, 8=78.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 issued rwts: total=3139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename2: (groupid=0, jobs=1): err= 0: pid=109338: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=310, BW=1242KiB/s (1272kB/s)(12.2MiB/10035msec) 00:47:01.478 slat (usec): min=4, max=8014, avg=16.98, stdev=186.48 00:47:01.478 clat (msec): min=2, max=132, avg=51.35, stdev=20.50 00:47:01.478 lat (msec): min=2, max=132, avg=51.37, stdev=20.51 00:47:01.478 clat percentiles (msec): 00:47:01.478 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 34], 20.00th=[ 36], 00:47:01.478 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 49], 60.00th=[ 55], 00:47:01.478 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 86], 00:47:01.478 | 99.00th=[ 115], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 133], 00:47:01.478 | 99.99th=[ 133] 00:47:01.478 bw ( KiB/s): min= 768, max= 2432, per=4.75%, avg=1240.00, stdev=335.36, samples=20 00:47:01.478 iops : min= 192, max= 608, avg=310.00, stdev=83.84, samples=20 00:47:01.478 lat (msec) : 4=2.57%, 10=1.54%, 20=1.51%, 50=46.47%, 100=45.76% 00:47:01.478 lat (msec) : 250=2.15% 00:47:01.478 cpu : usr=33.94%, sys=0.90%, ctx=936, majf=0, minf=9 00:47:01.478 IO depths : 1=0.9%, 2=2.2%, 4=8.8%, 8=75.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 issued rwts: total=3116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename2: (groupid=0, jobs=1): err= 0: pid=109339: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=267, BW=1070KiB/s (1096kB/s)(10.5MiB/10040msec) 00:47:01.478 slat (usec): min=5, max=11040, avg=34.08, stdev=434.17 00:47:01.478 clat (msec): min=7, max=140, avg=59.43, stdev=21.28 00:47:01.478 lat (msec): min=7, max=140, avg=59.47, stdev=21.30 00:47:01.478 clat percentiles (msec): 00:47:01.478 | 1.00th=[ 13], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 45], 00:47:01.478 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 59], 60.00th=[ 61], 00:47:01.478 | 70.00th=[ 69], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:47:01.478 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:47:01.478 | 99.99th=[ 140] 00:47:01.478 bw ( KiB/s): min= 640, max= 1664, per=4.09%, avg=1068.00, stdev=241.70, samples=20 00:47:01.478 iops : min= 160, max= 416, avg=267.00, stdev=60.42, samples=20 00:47:01.478 lat (msec) : 10=0.67%, 20=2.31%, 50=32.99%, 100=60.39%, 250=3.65% 00:47:01.478 cpu : usr=34.85%, sys=1.05%, ctx=986, majf=0, minf=9 00:47:01.478 IO depths : 1=2.0%, 2=4.4%, 4=14.6%, 8=68.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=90.7%, 8=4.1%, 16=5.2%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:47:01.478 issued rwts: total=2686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename2: (groupid=0, jobs=1): err= 0: pid=109340: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=252, BW=1009KiB/s (1033kB/s)(9.86MiB/10004msec) 00:47:01.478 slat (usec): min=4, max=8028, avg=25.67, stdev=309.58 00:47:01.478 clat (msec): min=18, max=143, avg=63.20, stdev=19.76 00:47:01.478 lat (msec): min=18, max=143, avg=63.23, stdev=19.76 00:47:01.478 clat percentiles (msec): 00:47:01.478 | 1.00th=[ 28], 5.00th=[ 36], 10.00th=[ 44], 20.00th=[ 48], 00:47:01.478 | 30.00th=[ 53], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 63], 00:47:01.478 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 96], 00:47:01.478 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:47:01.478 | 99.99th=[ 144] 00:47:01.478 bw ( KiB/s): min= 640, max= 1360, per=3.83%, avg=1001.26, stdev=189.04, samples=19 00:47:01.478 iops : min= 160, max= 340, avg=250.32, stdev=47.26, samples=19 00:47:01.478 lat (msec) : 20=0.63%, 50=25.08%, 100=69.85%, 250=4.44% 00:47:01.478 cpu : usr=37.19%, sys=1.19%, ctx=1035, majf=0, minf=9 00:47:01.478 IO depths : 1=2.5%, 2=5.7%, 4=15.9%, 8=65.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=91.7%, 8=3.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 issued rwts: total=2524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename2: (groupid=0, jobs=1): err= 0: pid=109341: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=266, BW=1064KiB/s (1090kB/s)(10.4MiB/10016msec) 00:47:01.478 slat (nsec): min=4801, max=77976, avg=12557.52, stdev=10175.75 00:47:01.478 clat (msec): min=23, max=147, avg=60.03, stdev=19.47 00:47:01.478 lat (msec): min=23, max=147, avg=60.04, stdev=19.47 00:47:01.478 clat percentiles (msec): 00:47:01.478 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 46], 00:47:01.478 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 59], 60.00th=[ 63], 00:47:01.478 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 95], 00:47:01.478 | 99.00th=[ 131], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:47:01.478 | 99.99th=[ 148] 00:47:01.478 bw ( KiB/s): min= 768, max= 1376, per=4.07%, avg=1064.20, stdev=166.47, samples=20 00:47:01.478 iops : min= 192, max= 344, avg=266.00, stdev=41.58, samples=20 00:47:01.478 lat (msec) : 50=37.49%, 100=58.80%, 250=3.71% 00:47:01.478 cpu : usr=32.26%, sys=0.86%, ctx=933, majf=0, minf=9 00:47:01.478 IO depths : 1=1.3%, 2=2.9%, 4=10.8%, 8=73.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 issued rwts: total=2665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename2: (groupid=0, jobs=1): err= 0: pid=109342: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=257, BW=1030KiB/s (1055kB/s)(10.1MiB/10026msec) 00:47:01.478 slat (usec): min=4, max=8030, avg=21.29, stdev=273.21 00:47:01.478 clat (msec): min=14, max=161, avg=61.93, stdev=21.66 00:47:01.478 lat (msec): min=14, max=161, avg=61.95, stdev=21.66 00:47:01.478 clat percentiles (msec): 00:47:01.478 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 46], 00:47:01.478 | 30.00th=[ 50], 40.00th=[ 58], 
50.00th=[ 61], 60.00th=[ 67], 00:47:01.478 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 96], 00:47:01.478 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 161], 99.95th=[ 161], 00:47:01.478 | 99.99th=[ 161] 00:47:01.478 bw ( KiB/s): min= 728, max= 1523, per=3.93%, avg=1027.50, stdev=191.86, samples=20 00:47:01.478 iops : min= 182, max= 380, avg=256.80, stdev=47.91, samples=20 00:47:01.478 lat (msec) : 20=1.86%, 50=29.74%, 100=65.38%, 250=3.02% 00:47:01.478 cpu : usr=36.25%, sys=1.25%, ctx=963, majf=0, minf=9 00:47:01.478 IO depths : 1=2.4%, 2=5.2%, 4=15.1%, 8=66.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:47:01.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.478 issued rwts: total=2582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.478 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.478 filename2: (groupid=0, jobs=1): err= 0: pid=109343: Wed Nov 20 05:58:19 2024 00:47:01.478 read: IOPS=302, BW=1209KiB/s (1238kB/s)(11.8MiB/10011msec) 00:47:01.478 slat (usec): min=5, max=8023, avg=16.45, stdev=163.15 00:47:01.479 clat (msec): min=8, max=139, avg=52.80, stdev=19.84 00:47:01.479 lat (msec): min=8, max=139, avg=52.82, stdev=19.85 00:47:01.479 clat percentiles (msec): 00:47:01.479 | 1.00th=[ 15], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 37], 00:47:01.479 | 30.00th=[ 41], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 56], 00:47:01.479 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 78], 95.00th=[ 91], 00:47:01.479 | 99.00th=[ 118], 99.50th=[ 134], 99.90th=[ 140], 99.95th=[ 140], 00:47:01.479 | 99.99th=[ 140] 00:47:01.479 bw ( KiB/s): min= 896, max= 1632, per=4.62%, avg=1206.80, stdev=247.24, samples=20 00:47:01.479 iops : min= 224, max= 408, avg=301.70, stdev=61.81, samples=20 00:47:01.479 lat (msec) : 10=0.53%, 20=2.11%, 50=48.40%, 100=46.28%, 250=2.68% 00:47:01.479 cpu : usr=40.42%, sys=1.32%, ctx=1383, majf=0, minf=9 00:47:01.479 IO depths : 1=0.5%, 2=1.0%, 4=6.3%, 8=78.6%, 16=13.6%, 32=0.0%, >=64=0.0% 00:47:01.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.479 complete : 0=0.0%, 4=89.2%, 8=6.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.479 issued rwts: total=3027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.479 filename2: (groupid=0, jobs=1): err= 0: pid=109344: Wed Nov 20 05:58:19 2024 00:47:01.479 read: IOPS=240, BW=962KiB/s (985kB/s)(9624KiB/10007msec) 00:47:01.479 slat (usec): min=4, max=8033, avg=25.63, stdev=294.80 00:47:01.479 clat (msec): min=25, max=139, avg=66.38, stdev=18.84 00:47:01.479 lat (msec): min=25, max=139, avg=66.40, stdev=18.84 00:47:01.479 clat percentiles (msec): 00:47:01.479 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 50], 00:47:01.479 | 30.00th=[ 58], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 71], 00:47:01.479 | 70.00th=[ 73], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 95], 00:47:01.479 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 140], 99.95th=[ 140], 00:47:01.479 | 99.99th=[ 140] 00:47:01.479 bw ( KiB/s): min= 768, max= 1152, per=3.64%, avg=952.42, stdev=123.20, samples=19 00:47:01.479 iops : min= 192, max= 288, avg=238.11, stdev=30.80, samples=19 00:47:01.479 lat (msec) : 50=20.57%, 100=75.15%, 250=4.28% 00:47:01.479 cpu : usr=32.19%, sys=0.91%, ctx=921, majf=0, minf=9 00:47:01.479 IO depths : 1=2.4%, 2=5.1%, 4=14.8%, 8=67.2%, 16=10.5%, 32=0.0%, >=64=0.0% 00:47:01.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:47:01.479 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:01.479 issued rwts: total=2406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:01.479 latency : target=0, window=0, percentile=100.00%, depth=16 00:47:01.479 00:47:01.479 Run status group 0 (all jobs): 00:47:01.479 READ: bw=25.5MiB/s (26.8MB/s), 962KiB/s-1326KiB/s (985kB/s-1358kB/s), io=256MiB (269MB), run=10001-10042msec 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 bdev_null0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 [2024-11-20 05:58:20.148801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:47:01.479 
05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 bdev_null1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:47:01.479 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:01.480 { 00:47:01.480 "params": { 00:47:01.480 "name": "Nvme$subsystem", 00:47:01.480 "trtype": "$TEST_TRANSPORT", 00:47:01.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:01.480 "adrfam": "ipv4", 00:47:01.480 "trsvcid": "$NVMF_PORT", 00:47:01.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:01.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:01.480 "hdgst": ${hdgst:-false}, 00:47:01.480 "ddgst": ${ddgst:-false} 00:47:01.480 }, 00:47:01.480 "method": "bdev_nvme_attach_controller" 00:47:01.480 } 00:47:01.480 EOF 00:47:01.480 )") 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:01.480 { 00:47:01.480 "params": { 00:47:01.480 "name": "Nvme$subsystem", 00:47:01.480 "trtype": "$TEST_TRANSPORT", 00:47:01.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:01.480 "adrfam": "ipv4", 00:47:01.480 "trsvcid": "$NVMF_PORT", 00:47:01.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:01.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:01.480 "hdgst": ${hdgst:-false}, 00:47:01.480 "ddgst": ${ddgst:-false} 00:47:01.480 }, 00:47:01.480 "method": "bdev_nvme_attach_controller" 00:47:01.480 } 00:47:01.480 EOF 00:47:01.480 )") 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
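What the gen_nvmf_target_json steps above amount to: one JSON fragment per subsystem ID is accumulated in a bash array via here-documents, the fragments are comma-joined under IFS=',', and jq validates and pretty-prints the result (the joined string is echoed just below). A minimal standalone sketch of the same pattern; the wrapping bdev-subsystem object here is assumed from SPDK's standard JSON config shape rather than copied from the helper:

config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.3}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join the fragments (first char of IFS) and let jq validate the document.
(IFS=,; printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' \
    "${config[*]}") | jq .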
00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:47:01.480 "params": { 00:47:01.480 "name": "Nvme0", 00:47:01.480 "trtype": "tcp", 00:47:01.480 "traddr": "10.0.0.3", 00:47:01.480 "adrfam": "ipv4", 00:47:01.480 "trsvcid": "4420", 00:47:01.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:01.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:01.480 "hdgst": false, 00:47:01.480 "ddgst": false 00:47:01.480 }, 00:47:01.480 "method": "bdev_nvme_attach_controller" 00:47:01.480 },{ 00:47:01.480 "params": { 00:47:01.480 "name": "Nvme1", 00:47:01.480 "trtype": "tcp", 00:47:01.480 "traddr": "10.0.0.3", 00:47:01.480 "adrfam": "ipv4", 00:47:01.480 "trsvcid": "4420", 00:47:01.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:01.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:47:01.480 "hdgst": false, 00:47:01.480 "ddgst": false 00:47:01.480 }, 00:47:01.480 "method": "bdev_nvme_attach_controller" 00:47:01.480 }' 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:47:01.480 05:58:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:01.480 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:01.480 ... 00:47:01.480 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:47:01.480 ... 
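The launch sequence above: fio_bdev resolves the SPDK fio plugin path, greps the plugin's ldd output for a linked sanitizer runtime (libasan, then libclang_rt.asan), and puts whatever it finds ahead of the plugin in LD_PRELOAD so sanitizer interposition wins; both greps come back empty in this build, hence the bare LD_PRELOAD=' /home/...spdk_bdev'. The bdev JSON arrives on /dev/fd/62 and the generated job file on /dev/fd/61. A condensed sketch of the equivalent direct invocation, with bdev.json and jobs.fio as illustrative stand-ins for the process substitutions:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Empty unless the plugin was built against ASan; mirrors the ldd/grep/awk above.
asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt\.asan/ {print $3; exit}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62< bdev.json 61< jobs.fio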
00:47:01.480 fio-3.35 00:47:01.480 Starting 4 threads 00:47:06.772 00:47:06.772 filename0: (groupid=0, jobs=1): err= 0: pid=109480: Wed Nov 20 05:58:26 2024 00:47:06.772 read: IOPS=2423, BW=18.9MiB/s (19.9MB/s)(94.7MiB/5001msec) 00:47:06.772 slat (usec): min=5, max=601, avg=17.61, stdev=12.95 00:47:06.772 clat (usec): min=1789, max=9255, avg=3212.48, stdev=476.06 00:47:06.772 lat (usec): min=1801, max=9270, avg=3230.09, stdev=475.21 00:47:06.772 clat percentiles (usec): 00:47:06.772 | 1.00th=[ 2900], 5.00th=[ 2966], 10.00th=[ 3032], 20.00th=[ 3064], 00:47:06.772 | 30.00th=[ 3097], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:47:06.772 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3261], 95.00th=[ 3425], 00:47:06.772 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 8979], 99.95th=[ 9110], 00:47:06.772 | 99.99th=[ 9241] 00:47:06.772 bw ( KiB/s): min=17536, max=20096, per=25.39%, avg=19702.11, stdev=820.62, samples=9 00:47:06.772 iops : min= 2192, max= 2512, avg=2462.67, stdev=102.57, samples=9 00:47:06.772 lat (msec) : 2=0.02%, 4=95.91%, 10=4.08% 00:47:06.772 cpu : usr=93.48%, sys=5.06%, ctx=21, majf=0, minf=9 00:47:06.772 IO depths : 1=10.3%, 2=24.9%, 4=50.1%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:06.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.772 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.772 issued rwts: total=12120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.772 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:06.772 filename0: (groupid=0, jobs=1): err= 0: pid=109481: Wed Nov 20 05:58:26 2024 00:47:06.772 read: IOPS=2430, BW=19.0MiB/s (19.9MB/s)(95.0MiB/5002msec) 00:47:06.772 slat (nsec): min=5119, max=72795, avg=10240.54, stdev=8087.93 00:47:06.772 clat (usec): min=776, max=9172, avg=3241.08, stdev=481.31 00:47:06.772 lat (usec): min=782, max=9188, avg=3251.32, stdev=480.82 00:47:06.772 clat percentiles (usec): 00:47:06.773 | 1.00th=[ 2900], 5.00th=[ 3064], 10.00th=[ 3097], 20.00th=[ 3130], 00:47:06.773 | 30.00th=[ 3130], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3163], 00:47:06.773 | 70.00th=[ 3195], 80.00th=[ 3195], 90.00th=[ 3261], 95.00th=[ 3392], 00:47:06.773 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 8717], 99.95th=[ 9110], 00:47:06.773 | 99.99th=[ 9110] 00:47:06.773 bw ( KiB/s): min=17792, max=20224, per=25.48%, avg=19774.22, stdev=751.17, samples=9 00:47:06.773 iops : min= 2224, max= 2528, avg=2471.78, stdev=93.90, samples=9 00:47:06.773 lat (usec) : 1000=0.26% 00:47:06.773 lat (msec) : 2=0.07%, 4=95.64%, 10=4.03% 00:47:06.773 cpu : usr=92.92%, sys=5.82%, ctx=7, majf=0, minf=9 00:47:06.773 IO depths : 1=11.8%, 2=24.8%, 4=50.1%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:06.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.773 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.773 issued rwts: total=12155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.773 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:06.773 filename1: (groupid=0, jobs=1): err= 0: pid=109482: Wed Nov 20 05:58:26 2024 00:47:06.773 read: IOPS=2425, BW=18.9MiB/s (19.9MB/s)(94.8MiB/5001msec) 00:47:06.773 slat (nsec): min=5754, max=67538, avg=11846.44, stdev=8325.87 00:47:06.773 clat (usec): min=1422, max=10749, avg=3247.18, stdev=471.51 00:47:06.773 lat (usec): min=1429, max=10758, avg=3259.03, stdev=470.68 00:47:06.773 clat percentiles (usec): 00:47:06.773 | 1.00th=[ 2966], 5.00th=[ 3032], 10.00th=[ 3064], 20.00th=[ 3097], 
00:47:06.773 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3163], 60.00th=[ 3163], 00:47:06.773 | 70.00th=[ 3195], 80.00th=[ 3228], 90.00th=[ 3294], 95.00th=[ 3425], 00:47:06.773 | 99.00th=[ 5276], 99.50th=[ 5276], 99.90th=[ 8848], 99.95th=[ 9241], 00:47:06.773 | 99.99th=[10421] 00:47:06.773 bw ( KiB/s): min=17536, max=20096, per=25.39%, avg=19702.11, stdev=819.16, samples=9 00:47:06.773 iops : min= 2192, max= 2512, avg=2462.67, stdev=102.37, samples=9 00:47:06.773 lat (msec) : 2=0.07%, 4=95.94%, 10=3.97%, 20=0.02% 00:47:06.773 cpu : usr=93.58%, sys=5.44%, ctx=14, majf=0, minf=10 00:47:06.773 IO depths : 1=11.8%, 2=24.9%, 4=50.1%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:06.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.773 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.773 issued rwts: total=12128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.773 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:06.773 filename1: (groupid=0, jobs=1): err= 0: pid=109483: Wed Nov 20 05:58:26 2024 00:47:06.773 read: IOPS=2424, BW=18.9MiB/s (19.9MB/s)(94.8MiB/5003msec) 00:47:06.773 slat (usec): min=3, max=110, avg=19.03, stdev=12.79 00:47:06.773 clat (usec): min=1792, max=12697, avg=3199.49, stdev=491.07 00:47:06.773 lat (usec): min=1814, max=12704, avg=3218.52, stdev=490.42 00:47:06.773 clat percentiles (usec): 00:47:06.773 | 1.00th=[ 2868], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3032], 00:47:06.773 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3130], 00:47:06.773 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3228], 95.00th=[ 3392], 00:47:06.773 | 99.00th=[ 5211], 99.50th=[ 5276], 99.90th=[ 8848], 99.95th=[ 9110], 00:47:06.773 | 99.99th=[10683] 00:47:06.773 bw ( KiB/s): min=17536, max=20096, per=25.39%, avg=19707.56, stdev=820.37, samples=9 00:47:06.773 iops : min= 2192, max= 2512, avg=2463.44, stdev=102.55, samples=9 00:47:06.773 lat (msec) : 2=0.03%, 4=95.89%, 10=4.06%, 20=0.02% 00:47:06.773 cpu : usr=94.58%, sys=4.26%, ctx=28, majf=0, minf=9 00:47:06.773 IO depths : 1=10.1%, 2=25.0%, 4=50.0%, 8=14.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:06.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.773 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:06.773 issued rwts: total=12128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:06.773 latency : target=0, window=0, percentile=100.00%, depth=8 00:47:06.773 00:47:06.773 Run status group 0 (all jobs): 00:47:06.773 READ: bw=75.8MiB/s (79.5MB/s), 18.9MiB/s-19.0MiB/s (19.9MB/s-19.9MB/s), io=379MiB (398MB), run=5001-5003msec 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
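A quick cross-check of the aggregate line above: four jobs at roughly 2,420-2,430 IOPS of 8 KiB random reads over ~5 s should land right on the reported totals, and they do. The teardown that follows then mirrors setup in reverse, deleting each subsystem before releasing its backing null bdev.

# 4 jobs x ~2425 IOPS x 8 KiB per read, in MiB/s and total MiB over 5 s.
echo "$(( 4 * 2425 * 8 / 1024 )) MiB/s"        # -> 75 MiB/s vs. bw=75.8MiB/s
echo "$(( 4 * 2425 * 8 * 5 / 1024 )) MiB io"   # -> 378 MiB vs. io=379MiB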
00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.773 ************************************ 00:47:06.773 END TEST fio_dif_rand_params 00:47:06.773 ************************************ 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:06.773 00:47:06.773 real 0m24.118s 00:47:06.773 user 2m6.284s 00:47:06.773 sys 0m5.978s 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:47:06.773 05:58:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:47:06.773 05:58:26 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:06.773 05:58:26 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:06.773 ************************************ 00:47:06.773 START TEST fio_dif_digest 00:47:06.773 ************************************ 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:47:06.773 05:58:26 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:06.773 bdev_null0 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.773 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:06.774 [2024-11-20 05:58:26.454291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 
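The digest variant just provisioned differs from the rand_params setup only in the bdev format: a 64 MiB null bdev with 512-byte data blocks plus 16 bytes of per-block metadata, DIF type 3 instead of type 1. Standalone rpc.py equivalents of the rpc_cmd calls above (arguments copied from the trace; assumes a running nvmf_tgt with the TCP transport already created):

rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420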
00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:47:06.774 { 00:47:06.774 "params": { 00:47:06.774 "name": "Nvme$subsystem", 00:47:06.774 "trtype": "$TEST_TRANSPORT", 00:47:06.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:06.774 "adrfam": "ipv4", 00:47:06.774 "trsvcid": "$NVMF_PORT", 00:47:06.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:06.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:06.774 "hdgst": ${hdgst:-false}, 00:47:06.774 "ddgst": ${ddgst:-false} 00:47:06.774 }, 00:47:06.774 "method": "bdev_nvme_attach_controller" 00:47:06.774 } 00:47:06.774 EOF 00:47:06.774 )") 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
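Note the ${hdgst:-false} and ${ddgst:-false} expansions in the template above: digests stay off unless the caller sets the variables, and target/dif.sh set both to true for this test (the hdgst=true/ddgst=true lines at the top of the fio_dif_digest block). In isolation:

# Bash default expansion carries the digest flags into the JSON.
unset hdgst ddgst
printf '"hdgst": %s\n' "${hdgst:-false}"   # "hdgst": false  (rand_params runs)
hdgst=true ddgst=true
printf '"ddgst": %s\n' "${ddgst:-false}"   # "ddgst": true   (this digest run)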
00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:47:06.774 "params": { 00:47:06.774 "name": "Nvme0", 00:47:06.774 "trtype": "tcp", 00:47:06.774 "traddr": "10.0.0.3", 00:47:06.774 "adrfam": "ipv4", 00:47:06.774 "trsvcid": "4420", 00:47:06.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:06.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:06.774 "hdgst": true, 00:47:06.774 "ddgst": true 00:47:06.774 }, 00:47:06.774 "method": "bdev_nvme_attach_controller" 00:47:06.774 }' 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:47:06.774 05:58:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:06.774 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:47:06.774 ... 
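With hdgst and ddgst true in the attach parameters printed above, the initiator negotiates CRC32C header and data digests on the NVMe/TCP connection, so every PDU in the 10-second, 3-thread, 128 KiB read run that follows is checksummed end to end. A trivial jq check that the generated config really carries the flags (object trimmed to the relevant keys):

jq -r '.params | "\(.name): hdgst=\(.hdgst) ddgst=\(.ddgst)"' <<'EOF'
{ "params": { "name": "Nvme0", "hdgst": true, "ddgst": true },
  "method": "bdev_nvme_attach_controller" }
EOF
# -> Nvme0: hdgst=true ddgst=true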
00:47:06.774 fio-3.35 00:47:06.774 Starting 3 threads 00:47:18.973 00:47:18.973 filename0: (groupid=0, jobs=1): err= 0: pid=109589: Wed Nov 20 05:58:37 2024 00:47:18.973 read: IOPS=227, BW=28.5MiB/s (29.8MB/s)(285MiB/10004msec) 00:47:18.973 slat (nsec): min=6593, max=54742, avg=16472.49, stdev=5760.83 00:47:18.973 clat (usec): min=7153, max=24099, avg=13154.29, stdev=2057.28 00:47:18.973 lat (usec): min=7172, max=24126, avg=13170.77, stdev=2057.92 00:47:18.973 clat percentiles (usec): 00:47:18.973 | 1.00th=[ 7832], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[12518], 00:47:18.973 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:47:18.973 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15139], 95.00th=[15664], 00:47:18.973 | 99.00th=[16581], 99.50th=[16909], 99.90th=[23987], 99.95th=[23987], 00:47:18.973 | 99.99th=[23987] 00:47:18.973 bw ( KiB/s): min=25856, max=33280, per=29.82%, avg=29305.26, stdev=2045.10, samples=19 00:47:18.973 iops : min= 202, max= 260, avg=228.95, stdev=15.98, samples=19 00:47:18.973 lat (msec) : 10=12.77%, 20=87.09%, 50=0.13% 00:47:18.973 cpu : usr=92.51%, sys=6.29%, ctx=8, majf=0, minf=9 00:47:18.973 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:18.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:18.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:18.973 issued rwts: total=2278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:18.973 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:18.973 filename0: (groupid=0, jobs=1): err= 0: pid=109590: Wed Nov 20 05:58:37 2024 00:47:18.973 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(340MiB/10009msec) 00:47:18.973 slat (nsec): min=3954, max=71508, avg=16350.60, stdev=6650.64 00:47:18.973 clat (usec): min=7427, max=52196, avg=11006.14, stdev=6297.83 00:47:18.973 lat (usec): min=7448, max=52212, avg=11022.49, stdev=6297.59 00:47:18.973 clat percentiles (usec): 00:47:18.973 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9372], 00:47:18.973 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:47:18.973 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:47:18.973 | 99.00th=[50594], 99.50th=[51643], 99.90th=[51643], 99.95th=[52167], 00:47:18.973 | 99.99th=[52167] 00:47:18.973 bw ( KiB/s): min=28416, max=39936, per=35.44%, avg=34828.80, stdev=3330.05, samples=20 00:47:18.973 iops : min= 222, max= 312, avg=272.10, stdev=26.02, samples=20 00:47:18.973 lat (msec) : 10=49.94%, 20=47.63%, 50=0.62%, 100=1.80% 00:47:18.973 cpu : usr=90.79%, sys=7.61%, ctx=10, majf=0, minf=9 00:47:18.973 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:18.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:18.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:18.973 issued rwts: total=2723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:18.973 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:18.973 filename0: (groupid=0, jobs=1): err= 0: pid=109591: Wed Nov 20 05:58:37 2024 00:47:18.973 read: IOPS=268, BW=33.5MiB/s (35.2MB/s)(335MiB/10004msec) 00:47:18.973 slat (nsec): min=6207, max=57962, avg=16228.56, stdev=7424.38 00:47:18.973 clat (usec): min=5393, max=22809, avg=11163.49, stdev=1793.41 00:47:18.973 lat (usec): min=5400, max=22836, avg=11179.72, stdev=1793.38 00:47:18.973 clat percentiles (usec): 00:47:18.973 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7635], 20.00th=[10421], 
00:47:18.973 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:47:18.973 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:47:18.973 | 99.00th=[13960], 99.50th=[14353], 99.90th=[22676], 99.95th=[22676], 00:47:18.973 | 99.99th=[22938] 00:47:18.973 bw ( KiB/s): min=30464, max=39424, per=35.02%, avg=34411.79, stdev=2334.32, samples=19 00:47:18.973 iops : min= 238, max= 308, avg=268.84, stdev=18.24, samples=19 00:47:18.973 lat (msec) : 10=16.47%, 20=83.41%, 50=0.11% 00:47:18.973 cpu : usr=91.88%, sys=6.52%, ctx=65, majf=0, minf=0 00:47:18.973 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:18.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:18.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:18.973 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:18.973 latency : target=0, window=0, percentile=100.00%, depth=3 00:47:18.973 00:47:18.973 Run status group 0 (all jobs): 00:47:18.973 READ: bw=96.0MiB/s (101MB/s), 28.5MiB/s-34.0MiB/s (29.8MB/s-35.7MB/s), io=961MiB (1007MB), run=10004-10009msec 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:18.973 ************************************ 00:47:18.973 END TEST fio_dif_digest 00:47:18.973 ************************************ 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:18.973 00:47:18.973 real 0m11.005s 00:47:18.973 user 0m28.182s 00:47:18.973 sys 0m2.329s 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:18.973 05:58:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:47:18.973 05:58:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:47:18.973 05:58:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:18.973 rmmod nvme_tcp 00:47:18.973 rmmod nvme_fabrics 00:47:18.973 rmmod nvme_keyring 00:47:18.973 05:58:37 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 108821 ']' 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 108821 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 108821 ']' 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 108821 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 108821 00:47:18.973 killing process with pid 108821 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 108821' 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@971 -- # kill 108821 00:47:18.973 05:58:37 nvmf_dif -- common/autotest_common.sh@976 -- # wait 108821 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:47:18.973 05:58:37 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:18.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:18.973 Waiting for block devices as requested 00:47:18.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:47:18.973 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:18.973 05:58:38 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:18.974 05:58:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:18.974 05:58:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:18.974 05:58:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:19.233 05:58:38 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:47:19.233 00:47:19.233 real 1m1.774s 00:47:19.233 user 3m50.661s 00:47:19.233 sys 0m19.333s 00:47:19.233 05:58:38 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:19.233 ************************************ 00:47:19.233 END TEST nvmf_dif 00:47:19.233 ************************************ 00:47:19.233 05:58:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:19.233 05:58:38 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:19.233 05:58:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:19.233 05:58:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:19.233 05:58:38 -- common/autotest_common.sh@10 -- # set +x 00:47:19.233 ************************************ 00:47:19.233 START TEST nvmf_abort_qd_sizes 00:47:19.233 ************************************ 00:47:19.233 05:58:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:47:19.233 * Looking for test storage... 00:47:19.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:47:19.233 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:47:19.233 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:47:19.233 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:47:19.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:19.493 --rc genhtml_branch_coverage=1 00:47:19.493 --rc genhtml_function_coverage=1 00:47:19.493 --rc genhtml_legend=1 00:47:19.493 --rc geninfo_all_blocks=1 00:47:19.493 --rc geninfo_unexecuted_blocks=1 00:47:19.493 00:47:19.493 ' 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:47:19.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:19.493 --rc genhtml_branch_coverage=1 00:47:19.493 --rc genhtml_function_coverage=1 00:47:19.493 --rc genhtml_legend=1 00:47:19.493 --rc geninfo_all_blocks=1 00:47:19.493 --rc geninfo_unexecuted_blocks=1 00:47:19.493 00:47:19.493 ' 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:47:19.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:19.493 --rc genhtml_branch_coverage=1 00:47:19.493 --rc genhtml_function_coverage=1 00:47:19.493 --rc genhtml_legend=1 00:47:19.493 --rc geninfo_all_blocks=1 00:47:19.493 --rc geninfo_unexecuted_blocks=1 00:47:19.493 00:47:19.493 ' 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:47:19.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:19.493 --rc genhtml_branch_coverage=1 00:47:19.493 --rc genhtml_function_coverage=1 00:47:19.493 --rc genhtml_legend=1 00:47:19.493 --rc geninfo_all_blocks=1 00:47:19.493 --rc geninfo_unexecuted_blocks=1 00:47:19.493 00:47:19.493 ' 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:47:19.493 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:19.494 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:47:19.494 Cannot find device "nvmf_init_br" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:47:19.494 Cannot find device "nvmf_init_br2" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:47:19.494 Cannot find device "nvmf_tgt_br" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:47:19.494 Cannot find device "nvmf_tgt_br2" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:47:19.494 Cannot find device "nvmf_init_br" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:47:19.494 Cannot find device "nvmf_init_br2" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:47:19.494 Cannot find device "nvmf_tgt_br" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:47:19.494 Cannot find device "nvmf_tgt_br2" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:47:19.494 Cannot find device "nvmf_br" 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:47:19.494 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:47:19.754 Cannot find device "nvmf_init_if" 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:47:19.754 Cannot find device "nvmf_init_if2" 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:19.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:19.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:47:19.754 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:47:20.013 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:20.013 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.162 ms 00:47:20.013 00:47:20.013 --- 10.0.0.3 ping statistics --- 00:47:20.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.013 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:47:20.013 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:47:20.013 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:47:20.013 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:47:20.013 00:47:20.013 --- 10.0.0.4 ping statistics --- 00:47:20.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.013 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:47:20.013 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:20.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:20.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:47:20.013 00:47:20.013 --- 10.0.0.1 ping statistics --- 00:47:20.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.013 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:47:20.013 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:47:20.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:20.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:47:20.013 00:47:20.013 --- 10.0.0.2 ping statistics --- 00:47:20.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:20.013 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:47:20.013 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:20.013 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:47:20.013 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:47:20.014 05:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:20.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:20.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:47:20.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=110244 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 110244 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 110244 ']' 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:20.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:47:20.840 05:58:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:21.098 [2024-11-20 05:58:40.820233] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
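The trace above assembles the suite's standard veth topology: two initiator-side interfaces on the host, two target-side interfaces inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. A minimal standalone sketch of one pair, with interface names and the 10.0.0.0/24 plan taken straight from the log (the *_if2/*_br2 pair mirrors it exactly):

  # host side: initiator veth, bridge-facing peers, shared bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                 # bridge the two host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                      # host -> namespaced target, as verified above

nvmf_tgt itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xf), so it serves 10.0.0.3/10.0.0.4 while the initiator side connects from the host.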
00:47:21.098 [2024-11-20 05:58:40.820379] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:21.098 [2024-11-20 05:58:40.980506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:21.357 [2024-11-20 05:58:41.053039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:21.357 [2024-11-20 05:58:41.053108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:21.357 [2024-11-20 05:58:41.053123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:21.357 [2024-11-20 05:58:41.053136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:21.357 [2024-11-20 05:58:41.053147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:21.357 [2024-11-20 05:58:41.054501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:21.357 [2024-11-20 05:58:41.055322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:47:21.357 [2024-11-20 05:58:41.055425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:47:21.357 [2024-11-20 05:58:41.055431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:47:22.291 05:58:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:47:22.291 05:58:41 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:47:22.292 05:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:22.292 05:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:22.292 05:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:22.292 ************************************ 00:47:22.292 START TEST spdk_target_abort 00:47:22.292 ************************************ 00:47:22.292 05:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:47:22.292 05:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:47:22.292 05:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:47:22.292 05:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:22.292 05:58:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:22.292 spdk_targetn1 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:22.292 [2024-11-20 05:58:42.053832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:22.292 [2024-11-20 05:58:42.097737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:22.292 05:58:42 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:22.292 05:58:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:25.579 Initializing NVMe Controllers 00:47:25.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:47:25.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:25.579 Initialization complete. Launching workers. 
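Before the queue-depth sweep results land, note the control-plane sequence that stood this target up. Reconstructed from the rpc_cmd calls traced above (a sketch only: rpc_cmd in this suite is assumed to wrap scripts/rpc.py against the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # claim the local PCIe NVMe drive as an SPDK bdev named spdk_target
  "$rpc" bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  # TCP transport, then a subsystem exporting that bdev on 10.0.0.3:4420
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420

The sweep then drives build/examples/abort against that listener three times, with -q 4, 24, and 64, a 50/50 read/write mix (-w rw -M 50) and 4 KiB I/O (-o 4096); the NS/CTRLR lines that follow report how many in-flight commands each abort pass managed to cancel.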
00:47:25.579 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13001, failed: 0 00:47:25.579 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1108, failed to submit 11893 00:47:25.579 success 786, unsuccessful 322, failed 0 00:47:25.579 05:58:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:25.579 05:58:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:28.886 Initializing NVMe Controllers 00:47:28.886 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:47:28.886 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:28.886 Initialization complete. Launching workers. 00:47:28.886 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5997, failed: 0 00:47:28.886 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1278, failed to submit 4719 00:47:28.886 success 297, unsuccessful 981, failed 0 00:47:28.886 05:58:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:28.886 05:58:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:32.165 Initializing NVMe Controllers 00:47:32.165 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:47:32.165 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:32.165 Initialization complete. Launching workers. 
00:47:32.165 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31433, failed: 0 00:47:32.165 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2750, failed to submit 28683 00:47:32.165 success 425, unsuccessful 2325, failed 0 00:47:32.165 05:58:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:47:32.165 05:58:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:32.165 05:58:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:32.165 05:58:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:32.165 05:58:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:47:32.165 05:58:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:32.165 05:58:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:33.538 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110244 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 110244 ']' 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 110244 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 110244 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:47:33.539 killing process with pid 110244 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 110244' 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 110244 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 110244 00:47:33.539 ************************************ 00:47:33.539 END TEST spdk_target_abort 00:47:33.539 ************************************ 00:47:33.539 00:47:33.539 real 0m11.289s 00:47:33.539 user 0m45.032s 00:47:33.539 sys 0m2.356s 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:33.539 05:58:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:47:33.539 05:58:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:33.539 05:58:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:33.539 05:58:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:33.539 ************************************ 00:47:33.539 START TEST kernel_target_abort 00:47:33.539 
************************************ 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:47:33.539 05:58:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:33.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:34.056 Waiting for block devices as requested 00:47:34.056 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:47:34.056 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:47:34.314 No valid GPT data, bailing 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:47:34.314 No valid GPT data, bailing 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:47:34.314 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:47:34.315 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:47:34.315 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:47:34.573 No valid GPT data, bailing 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:47:34.573 No valid GPT data, bailing 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 --hostid=9b54f445-f067-44f7-9ac3-ecdb7acf7203 -a 10.0.0.1 -t tcp -s 4420 00:47:34.573 00:47:34.573 Discovery Log Number of Records 2, Generation counter 2 00:47:34.573 =====Discovery Log Entry 0====== 00:47:34.573 trtype: tcp 00:47:34.573 adrfam: ipv4 00:47:34.573 subtype: current discovery subsystem 00:47:34.573 treq: not specified, sq flow control disable supported 00:47:34.573 portid: 1 00:47:34.573 trsvcid: 4420 00:47:34.573 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:47:34.573 traddr: 10.0.0.1 00:47:34.573 eflags: none 00:47:34.573 sectype: none 00:47:34.573 =====Discovery Log Entry 1====== 00:47:34.573 trtype: tcp 00:47:34.573 adrfam: ipv4 00:47:34.573 subtype: nvme subsystem 00:47:34.573 treq: not specified, sq flow control disable supported 00:47:34.573 portid: 1 00:47:34.573 trsvcid: 4420 00:47:34.573 subnqn: nqn.2016-06.io.spdk:testnqn 00:47:34.573 traddr: 10.0.0.1 00:47:34.573 eflags: none 00:47:34.573 sectype: none 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:47:34.573 05:58:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:34.573 05:58:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:37.854 Initializing NVMe Controllers 00:47:37.854 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:37.854 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:37.854 Initialization complete. Launching workers. 00:47:37.854 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41006, failed: 0 00:47:37.854 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41006, failed to submit 0 00:47:37.854 success 0, unsuccessful 41006, failed 0 00:47:37.854 05:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:37.854 05:58:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:41.202 Initializing NVMe Controllers 00:47:41.202 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:41.202 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:41.202 Initialization complete. Launching workers. 
00:47:41.202 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78658, failed: 0 00:47:41.202 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34038, failed to submit 44620 00:47:41.202 success 0, unsuccessful 34038, failed 0 00:47:41.202 05:59:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:47:41.202 05:59:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:47:44.488 Initializing NVMe Controllers 00:47:44.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:47:44.489 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:47:44.489 Initialization complete. Launching workers. 00:47:44.489 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93556, failed: 0 00:47:44.489 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23350, failed to submit 70206 00:47:44.489 success 0, unsuccessful 23350, failed 0 00:47:44.489 05:59:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:47:44.489 05:59:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:47:44.489 05:59:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:47:44.489 05:59:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:44.489 05:59:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:47:44.489 05:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:47:44.489 05:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:47:44.489 05:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:47:44.489 05:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:47:44.489 05:59:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:45.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:47.594 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:47:47.594 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:47:47.594 00:47:47.594 real 0m13.806s 00:47:47.594 user 0m6.458s 00:47:47.594 sys 0m4.764s 00:47:47.594 05:59:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:47.594 05:59:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:47:47.594 ************************************ 00:47:47.594 END TEST kernel_target_abort 00:47:47.594 ************************************ 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:47:47.594 
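The kernel-target half that just finished is plain nvmet configfs plumbing rather than an SPDK app. In outline, with paths and values taken from the trace (the echo redirect targets are not captured by xtrace, so the attribute file names below are the standard nvmet ones and should be read as an assumption):

  modprobe nvmet            # nvmet_tcp is pulled in when the port is typed tcp below
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$sub/namespaces/1
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$ns" "$port"
  # the trace also writes a serial string ("SPDK-<nqn>"); its target file is elided here
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$ns/device_path"     # the spare, partition-table-free device found above
  echo 1            > "$ns/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"          # activates the export; 'nvme discover' then sees two log entries

clean_kernel_target undoes this in reverse (rm the symlink, rmdir the groups, modprobe -r nvmet_tcp nvmet), and the nvmftestfini teardown that follows strips only SPDK's own firewall rules by filtering on the comment tag: iptables-save | grep -v SPDK_NVMF | iptables-restore.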
05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:47.594 rmmod nvme_tcp 00:47:47.594 rmmod nvme_fabrics 00:47:47.594 rmmod nvme_keyring 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:47:47.594 Process with pid 110244 is not found 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 110244 ']' 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 110244 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 110244 ']' 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 110244 00:47:47.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (110244) - No such process 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 110244 is not found' 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:47:47.594 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:47.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:47.854 Waiting for block devices as requested 00:47:47.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:47:48.113 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:48.113 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:47:48.113 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:47:48.113 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:47:48.113 05:59:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:47:48.113 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:47:48.113 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:47:48.113 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:48.113 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:47:48.113 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:47:48.113 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:47:48.371 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:47:48.371 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:47:48.371 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:47:48.371 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:47:48.371 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:47:48.371 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:47:48.372 05:59:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:47:48.372 00:47:48.372 real 0m29.269s 00:47:48.372 user 0m52.978s 00:47:48.372 sys 0m8.960s 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:47:48.372 05:59:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:47:48.372 ************************************ 00:47:48.372 END TEST nvmf_abort_qd_sizes 00:47:48.372 ************************************ 00:47:48.631 05:59:08 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:47:48.631 05:59:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:47:48.631 05:59:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:47:48.631 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:47:48.631 ************************************ 00:47:48.631 START TEST keyring_file 00:47:48.631 ************************************ 00:47:48.631 05:59:08 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:47:48.631 * Looking for test storage... 
00:47:48.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:47:48.631 05:59:08 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:47:48.631 05:59:08 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:47:48.631 05:59:08 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:47:48.631 05:59:08 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@345 -- # : 1 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@353 -- # local d=1 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@355 -- # echo 1 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@353 -- # local d=2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@355 -- # echo 2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:48.631 05:59:08 keyring_file -- scripts/common.sh@368 -- # return 0 00:47:48.631 05:59:08 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:48.631 05:59:08 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:47:48.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:48.631 --rc genhtml_branch_coverage=1 00:47:48.631 --rc genhtml_function_coverage=1 00:47:48.631 --rc genhtml_legend=1 00:47:48.631 --rc geninfo_all_blocks=1 00:47:48.632 --rc geninfo_unexecuted_blocks=1 00:47:48.632 00:47:48.632 ' 00:47:48.632 05:59:08 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:47:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:48.632 --rc genhtml_branch_coverage=1 00:47:48.632 --rc genhtml_function_coverage=1 00:47:48.632 --rc genhtml_legend=1 00:47:48.632 --rc geninfo_all_blocks=1 00:47:48.632 --rc 
geninfo_unexecuted_blocks=1 00:47:48.632 00:47:48.632 ' 00:47:48.632 05:59:08 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:47:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:48.632 --rc genhtml_branch_coverage=1 00:47:48.632 --rc genhtml_function_coverage=1 00:47:48.632 --rc genhtml_legend=1 00:47:48.632 --rc geninfo_all_blocks=1 00:47:48.632 --rc geninfo_unexecuted_blocks=1 00:47:48.632 00:47:48.632 ' 00:47:48.632 05:59:08 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:47:48.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:48.632 --rc genhtml_branch_coverage=1 00:47:48.632 --rc genhtml_function_coverage=1 00:47:48.632 --rc genhtml_legend=1 00:47:48.632 --rc geninfo_all_blocks=1 00:47:48.632 --rc geninfo_unexecuted_blocks=1 00:47:48.632 00:47:48.632 ' 00:47:48.632 05:59:08 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:47:48.632 05:59:08 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:48.632 05:59:08 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:48.632 05:59:08 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:47:48.891 05:59:08 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:48.891 05:59:08 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:48.891 05:59:08 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:48.891 05:59:08 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:48.891 05:59:08 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:48.891 05:59:08 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:48.891 05:59:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:47:48.891 05:59:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@51 -- # : 0 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:48.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:48.891 05:59:08 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iluOF7355R 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iluOF7355R 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iluOF7355R 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iluOF7355R 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uQIOKkxopD 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:48.891 05:59:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uQIOKkxopD 00:47:48.891 05:59:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uQIOKkxopD 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uQIOKkxopD 00:47:48.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
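prep_key above writes each hex key into a mktemp file after passing it through format_interchange_psk, which wraps the raw key in the NVMe/TCP PSK interchange format (TP 8011): a NVMeTLSkey-1:<digest>: prefix, base64 of the key bytes followed by their little-endian CRC32, and a trailing colon. A sketch of that encoding under those assumptions; the real helper lives in nvmf/common.sh and may differ in detail (treating the key argument as raw ASCII bytes here is a guess):

# Hypothetical re-creation of format_key; digest 0 means the PSK
# carries no hash transform. Mirrors the script's own `python -` trick.
format_key_sketch() {
    local prefix=$1 key=$2 digest=$3
    python - "$prefix" "$key" "$digest" << 'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # integrity tag over the key
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}
format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 0

The chmod 0600 on the resulting file is not cosmetic; the keyring enforces owner-only permissions later in this run.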
00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=111178 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:48.891 05:59:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111178 00:47:48.891 05:59:08 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 111178 ']' 00:47:48.891 05:59:08 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:48.891 05:59:08 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:47:48.891 05:59:08 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:48.891 05:59:08 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:47:48.891 05:59:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:48.891 [2024-11-20 05:59:08.745548] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:47:48.891 [2024-11-20 05:59:08.745895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111178 ] 00:47:49.150 [2024-11-20 05:59:08.897738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:49.150 [2024-11-20 05:59:08.964104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:47:49.409 05:59:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:49.409 [2024-11-20 05:59:09.227101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:49.409 null0 00:47:49.409 [2024-11-20 05:59:09.259074] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:49.409 [2024-11-20 05:59:09.259401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:49.409 05:59:09 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 
00:47:49.409 [2024-11-20 05:59:09.291069] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:47:49.409 2024/11/20 05:59:09 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:47:49.409 request: 00:47:49.409 { 00:47:49.409 "method": "nvmf_subsystem_add_listener", 00:47:49.409 "params": { 00:47:49.409 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:47:49.409 "secure_channel": false, 00:47:49.409 "listen_address": { 00:47:49.409 "trtype": "tcp", 00:47:49.409 "traddr": "127.0.0.1", 00:47:49.409 "trsvcid": "4420" 00:47:49.409 } 00:47:49.409 } 00:47:49.409 } 00:47:49.409 Got JSON-RPC error response 00:47:49.409 GoRPCClient: error on JSON-RPC call 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:49.409 05:59:09 keyring_file -- keyring/file.sh@47 -- # bperfpid=111200 00:47:49.409 05:59:09 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:47:49.409 05:59:09 keyring_file -- keyring/file.sh@49 -- # waitforlisten 111200 /var/tmp/bperf.sock 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 111200 ']' 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:49.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:47:49.409 05:59:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:49.668 [2024-11-20 05:59:09.358336] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
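The listener test above is a deliberate failure: 127.0.0.1:4420 was already added, so the second nvmf_subsystem_add_listener returns -32602 and the NOT wrapper converts that non-zero exit into a pass (es=1, within the accepted range). NOT and valid_exec_arg come from autotest_common.sh; a simplified stand-in for the inversion they perform, not the exact helper:

# Succeed only when the wrapped command fails -- the building block
# for every negative test in this suite. The real helper also records
# the exit status in `es` for finer-grained assertions.
NOT() {
    if "$@"; then
        return 1   # unexpected success: the negative test fails
    fi
    return 0       # expected failure observed
}
NOT false && echo "negative test passed"
NOT true  || echo "positive command correctly flagged"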
00:47:49.668 [2024-11-20 05:59:09.358916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111200 ] 00:47:49.668 [2024-11-20 05:59:09.517619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:49.926 [2024-11-20 05:59:09.612226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:50.859 05:59:10 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:47:50.859 05:59:10 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:47:50.859 05:59:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:50.859 05:59:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:50.859 05:59:10 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uQIOKkxopD 00:47:50.859 05:59:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uQIOKkxopD 00:47:51.117 05:59:10 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:47:51.117 05:59:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:47:51.117 05:59:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:51.117 05:59:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:51.117 05:59:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:51.376 05:59:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.iluOF7355R == \/\t\m\p\/\t\m\p\.\i\l\u\O\F\7\3\5\5\R ]] 00:47:51.376 05:59:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:47:51.376 05:59:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:47:51.376 05:59:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:51.376 05:59:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:51.376 05:59:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:51.634 05:59:11 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uQIOKkxopD == \/\t\m\p\/\t\m\p\.\u\Q\I\O\K\k\x\o\p\D ]] 00:47:51.634 05:59:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:47:51.634 05:59:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:51.634 05:59:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:51.634 05:59:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:51.634 05:59:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:51.634 05:59:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:51.893 05:59:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:47:51.893 05:59:11 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:47:51.893 05:59:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:51.893 05:59:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:51.893 05:59:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:51.893 05:59:11 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:51.893 05:59:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:52.152 05:59:11 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:47:52.152 05:59:11 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:52.152 05:59:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:52.411 [2024-11-20 05:59:12.204522] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:52.411 nvme0n1 00:47:52.411 05:59:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:47:52.411 05:59:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:52.411 05:59:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:52.411 05:59:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:52.411 05:59:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:52.411 05:59:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:52.979 05:59:12 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:47:52.979 05:59:12 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:47:52.979 05:59:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:52.979 05:59:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:52.979 05:59:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:52.979 05:59:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:52.979 05:59:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:53.238 05:59:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:47:53.238 05:59:12 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:53.238 Running I/O for 1 seconds... 
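Everything bperf_cmd does above is plain rpc.py pointed at the bdevperf socket instead of the default /var/tmp/spdk.sock, and get_refcnt is a jq filter over keyring_get_keys output. Registering a key holds one reference; the --psk attach just performed takes a second, which the (( 2 == 2 )) check confirms while key1 stays at 1. The same queries by hand (socket path and key names as in this run):

# Query a key's reference count from the running bdevperf instance.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
get_refcnt_sketch() {
    $RPC keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"
}
get_refcnt_sketch key0   # 2: registered + held by the nvme0 controller
get_refcnt_sketch key1   # 1: registered only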
00:47:54.174 14001.00 IOPS, 54.69 MiB/s 00:47:54.174 Latency(us) 00:47:54.174 [2024-11-20T05:59:14.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:54.174 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:47:54.174 nvme0n1 : 1.01 14056.99 54.91 0.00 0.00 9086.41 3292.40 14792.41 00:47:54.174 [2024-11-20T05:59:14.093Z] =================================================================================================================== 00:47:54.174 [2024-11-20T05:59:14.093Z] Total : 14056.99 54.91 0.00 0.00 9086.41 3292.40 14792.41 00:47:54.174 { 00:47:54.174 "results": [ 00:47:54.174 { 00:47:54.174 "job": "nvme0n1", 00:47:54.174 "core_mask": "0x2", 00:47:54.174 "workload": "randrw", 00:47:54.174 "percentage": 50, 00:47:54.174 "status": "finished", 00:47:54.174 "queue_depth": 128, 00:47:54.174 "io_size": 4096, 00:47:54.174 "runtime": 1.005194, 00:47:54.174 "iops": 14056.988004305636, 00:47:54.174 "mibps": 54.91010939181889, 00:47:54.174 "io_failed": 0, 00:47:54.174 "io_timeout": 0, 00:47:54.174 "avg_latency_us": 9086.408042867253, 00:47:54.174 "min_latency_us": 3292.4038095238097, 00:47:54.174 "max_latency_us": 14792.411428571428 00:47:54.174 } 00:47:54.174 ], 00:47:54.174 "core_count": 1 00:47:54.174 } 00:47:54.174 05:59:14 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:54.174 05:59:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:54.433 05:59:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:47:54.433 05:59:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:54.433 05:59:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:54.433 05:59:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:54.433 05:59:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:54.433 05:59:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:54.693 05:59:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:47:54.693 05:59:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:47:54.693 05:59:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:54.693 05:59:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:54.693 05:59:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:54.693 05:59:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:54.693 05:59:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:54.951 05:59:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:47:55.209 05:59:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:55.209 05:59:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:55.209 05:59:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:55.209 05:59:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:55.209 05:59:14 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:55.209 05:59:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:55.209 05:59:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:55.209 05:59:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:55.209 05:59:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:47:55.209 [2024-11-20 05:59:15.123036] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:55.209 [2024-11-20 05:59:15.123509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2463720 (107): Transport endpoint is not connected 00:47:55.209 [2024-11-20 05:59:15.124492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2463720 (9): Bad file descriptor 00:47:55.209 [2024-11-20 05:59:15.125491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:55.209 [2024-11-20 05:59:15.125509] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:55.209 [2024-11-20 05:59:15.125519] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:55.209 [2024-11-20 05:59:15.125533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
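The error chain above is the mismatched-PSK path: key1 does not match the PSK the target was configured with, the TLS handshake collapses, the socket is torn down (errno 107, then a bad file descriptor on the already-dead qpair), and the controller ends in a failed state before the RPC surfaces an I/O error. The test only asserts that the attach fails; the shape of that assertion, reusing the NOT stand-in sketched earlier (all flags as they appear in this run):

# Attaching with the wrong PSK must fail; NOT turns that into a pass.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
NOT $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key1 \
    && echo "mismatched key rejected as expected"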
00:47:55.467 2024/11/20 05:59:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:47:55.467 request: 00:47:55.467 { 00:47:55.467 "method": "bdev_nvme_attach_controller", 00:47:55.467 "params": { 00:47:55.467 "name": "nvme0", 00:47:55.467 "trtype": "tcp", 00:47:55.467 "traddr": "127.0.0.1", 00:47:55.467 "adrfam": "ipv4", 00:47:55.467 "trsvcid": "4420", 00:47:55.467 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:55.467 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:55.467 "prchk_reftag": false, 00:47:55.467 "prchk_guard": false, 00:47:55.467 "hdgst": false, 00:47:55.467 "ddgst": false, 00:47:55.467 "psk": "key1", 00:47:55.467 "allow_unrecognized_csi": false 00:47:55.467 } 00:47:55.467 } 00:47:55.467 Got JSON-RPC error response 00:47:55.467 GoRPCClient: error on JSON-RPC call 00:47:55.467 05:59:15 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:55.467 05:59:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:55.467 05:59:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:55.467 05:59:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:55.467 05:59:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:47:55.467 05:59:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:55.467 05:59:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:55.467 05:59:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:55.467 05:59:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:55.467 05:59:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:55.725 05:59:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:47:55.725 05:59:15 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:47:55.725 05:59:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:55.725 05:59:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:55.725 05:59:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:55.725 05:59:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:55.725 05:59:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:55.982 05:59:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:47:55.983 05:59:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:47:55.983 05:59:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:56.241 05:59:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:47:56.241 05:59:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:56.498 05:59:16 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:47:56.498 05:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:47:56.498 05:59:16 keyring_file -- keyring/file.sh@78 -- # jq length 00:47:56.498 05:59:16 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:47:56.498 05:59:16 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.iluOF7355R 00:47:56.498 05:59:16 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:56.498 05:59:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:56.498 05:59:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:56.498 05:59:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:56.498 05:59:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:56.498 05:59:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:56.498 05:59:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:56.498 05:59:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:56.498 05:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:56.756 [2024-11-20 05:59:16.604737] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iluOF7355R': 0100660 00:47:56.756 [2024-11-20 05:59:16.604779] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:56.756 2024/11/20 05:59:16 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.iluOF7355R], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:47:56.756 request: 00:47:56.756 { 00:47:56.756 "method": "keyring_file_add_key", 00:47:56.756 "params": { 00:47:56.756 "name": "key0", 00:47:56.756 "path": "/tmp/tmp.iluOF7355R" 00:47:56.756 } 00:47:56.756 } 00:47:56.756 Got JSON-RPC error response 00:47:56.756 GoRPCClient: error on JSON-RPC call 00:47:56.756 05:59:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:56.756 05:59:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:56.756 05:59:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:56.756 05:59:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:56.756 05:59:16 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.iluOF7355R 00:47:56.756 05:59:16 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:56.756 05:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iluOF7355R 00:47:57.015 05:59:16 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.iluOF7355R 00:47:57.015 05:59:16 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:47:57.015 05:59:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:57.015 05:59:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:57.015 05:59:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:57.015 05:59:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:57.015 05:59:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:57.593 05:59:17 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:47:57.593 05:59:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:57.593 05:59:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:47:57.593 05:59:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:57.593 05:59:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:47:57.593 05:59:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:57.593 05:59:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:47:57.593 05:59:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:47:57.593 05:59:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:57.593 05:59:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:57.851 [2024-11-20 05:59:17.552929] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iluOF7355R': No such file or directory 00:47:57.851 [2024-11-20 05:59:17.552977] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:57.851 [2024-11-20 05:59:17.552998] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:57.851 [2024-11-20 05:59:17.553025] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:47:57.851 [2024-11-20 05:59:17.553036] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:57.851 [2024-11-20 05:59:17.553045] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:57.851 2024/11/20 05:59:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:47:57.851 request: 00:47:57.851 { 00:47:57.851 "method": "bdev_nvme_attach_controller", 00:47:57.851 "params": { 00:47:57.851 "name": "nvme0", 00:47:57.851 "trtype": "tcp", 00:47:57.851 "traddr": "127.0.0.1", 00:47:57.851 "adrfam": "ipv4", 00:47:57.851 "trsvcid": "4420", 00:47:57.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:57.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:57.851 "prchk_reftag": false, 00:47:57.851 "prchk_guard": false, 00:47:57.851 "hdgst": false, 00:47:57.851 "ddgst": false, 00:47:57.851 "psk": "key0", 00:47:57.851 "allow_unrecognized_csi": false 00:47:57.851 } 00:47:57.851 } 00:47:57.851 Got JSON-RPC error response 00:47:57.851 
GoRPCClient: error on JSON-RPC call 00:47:57.851 05:59:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:47:57.851 05:59:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:47:57.851 05:59:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:47:57.851 05:59:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:47:57.851 05:59:17 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:47:57.851 05:59:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:58.160 05:59:17 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zoCe5kcMcg 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:58.160 05:59:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:58.160 05:59:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:47:58.160 05:59:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:58.160 05:59:17 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:58.160 05:59:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:58.160 05:59:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zoCe5kcMcg 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zoCe5kcMcg 00:47:58.160 05:59:17 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.zoCe5kcMcg 00:47:58.160 05:59:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zoCe5kcMcg 00:47:58.160 05:59:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zoCe5kcMcg 00:47:58.418 05:59:18 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:58.418 05:59:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:58.675 nvme0n1 00:47:58.675 05:59:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:47:58.675 05:59:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:58.675 05:59:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:58.675 05:59:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:58.675 05:59:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:58.675 05:59:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
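The two negative tests leading into this section pin down how keyring_file validates its backing file at use time rather than caching the key material: a group-accessible file (0660) is refused outright at registration ("Invalid permissions for key file ... 0100660"), and deleting the file after a successful add leaves the key registered but strands every later attach on "Could not stat key file". Recovery, as performed above, is to drop the stale entry and register a freshly minted 0600 file. The same sequence in isolation (paths from this run):

# keyring_file wants owner-only permissions and re-reads the file on
# every use; once the file is gone, the entry is dead weight.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
chmod 0660 /tmp/tmp.iluOF7355R
$RPC keyring_file_add_key key0 /tmp/tmp.iluOF7355R \
    || echo "0660 rejected, as expected"
chmod 0600 /tmp/tmp.iluOF7355R
$RPC keyring_file_add_key key0 /tmp/tmp.iluOF7355R   # now accepted
rm -f /tmp/tmp.iluOF7355R    # key stays listed, but any use now fails
$RPC keyring_file_remove_key key0                    # drop stale entry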
00:47:59.242 05:59:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:47:59.242 05:59:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:47:59.242 05:59:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:59.504 05:59:19 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:47:59.504 05:59:19 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:47:59.504 05:59:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:59.504 05:59:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:59.504 05:59:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:59.762 05:59:19 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:59.762 05:59:19 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:59.762 05:59:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:59.762 05:59:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:59.762 05:59:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:59.762 05:59:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:59.762 05:59:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:00.021 05:59:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:48:00.022 05:59:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:48:00.022 05:59:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:48:00.280 05:59:20 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:48:00.280 05:59:20 keyring_file -- keyring/file.sh@105 -- # jq length 00:48:00.280 05:59:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:00.539 05:59:20 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:48:00.539 05:59:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zoCe5kcMcg 00:48:00.539 05:59:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zoCe5kcMcg 00:48:01.106 05:59:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uQIOKkxopD 00:48:01.106 05:59:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uQIOKkxopD 00:48:01.366 05:59:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:01.366 05:59:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:48:01.625 nvme0n1 00:48:01.625 05:59:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:48:01.625 05:59:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
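Two behaviours are exercised above. First, removing a key that a live controller still holds does not free it immediately: the entry stays visible with "removed": true until the controller detaches, after which keyring_get_keys drains to an empty list. Second, save_config serializes the whole runtime state (both keyring_file keys and the attached nvme0 controller, as the JSON below shows), and the test shortly replays that JSON into a brand-new bdevperf through -c /dev/fd/63, i.e. bash process substitution. Both patterns by hand, under the same socket and flags as this run:

# Deferred key removal, then config capture/replay into a fresh app.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC keyring_file_remove_key key0
$RPC keyring_get_keys | jq -e '.[] | select(.name == "key0") | .removed'
$RPC bdev_nvme_detach_controller nvme0
$RPC keyring_get_keys | jq -e 'length == 0'   # nothing left behind
config=$($RPC save_config)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")  # -c sees /dev/fd/63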
00:48:01.885 05:59:21 keyring_file -- keyring/file.sh@113 -- # config='{ 00:48:01.885 "subsystems": [ 00:48:01.885 { 00:48:01.885 "subsystem": "keyring", 00:48:01.885 "config": [ 00:48:01.885 { 00:48:01.885 "method": "keyring_file_add_key", 00:48:01.885 "params": { 00:48:01.885 "name": "key0", 00:48:01.885 "path": "/tmp/tmp.zoCe5kcMcg" 00:48:01.885 } 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "method": "keyring_file_add_key", 00:48:01.885 "params": { 00:48:01.885 "name": "key1", 00:48:01.885 "path": "/tmp/tmp.uQIOKkxopD" 00:48:01.885 } 00:48:01.885 } 00:48:01.885 ] 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "subsystem": "iobuf", 00:48:01.885 "config": [ 00:48:01.885 { 00:48:01.885 "method": "iobuf_set_options", 00:48:01.885 "params": { 00:48:01.885 "enable_numa": false, 00:48:01.885 "large_bufsize": 135168, 00:48:01.885 "large_pool_count": 1024, 00:48:01.885 "small_bufsize": 8192, 00:48:01.885 "small_pool_count": 8192 00:48:01.885 } 00:48:01.885 } 00:48:01.885 ] 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "subsystem": "sock", 00:48:01.885 "config": [ 00:48:01.885 { 00:48:01.885 "method": "sock_set_default_impl", 00:48:01.885 "params": { 00:48:01.885 "impl_name": "posix" 00:48:01.885 } 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "method": "sock_impl_set_options", 00:48:01.885 "params": { 00:48:01.885 "enable_ktls": false, 00:48:01.885 "enable_placement_id": 0, 00:48:01.885 "enable_quickack": false, 00:48:01.885 "enable_recv_pipe": true, 00:48:01.885 "enable_zerocopy_send_client": false, 00:48:01.885 "enable_zerocopy_send_server": true, 00:48:01.885 "impl_name": "ssl", 00:48:01.885 "recv_buf_size": 4096, 00:48:01.885 "send_buf_size": 4096, 00:48:01.885 "tls_version": 0, 00:48:01.885 "zerocopy_threshold": 0 00:48:01.885 } 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "method": "sock_impl_set_options", 00:48:01.885 "params": { 00:48:01.885 "enable_ktls": false, 00:48:01.885 "enable_placement_id": 0, 00:48:01.885 "enable_quickack": false, 00:48:01.885 "enable_recv_pipe": true, 00:48:01.885 "enable_zerocopy_send_client": false, 00:48:01.885 "enable_zerocopy_send_server": true, 00:48:01.885 "impl_name": "posix", 00:48:01.885 "recv_buf_size": 2097152, 00:48:01.885 "send_buf_size": 2097152, 00:48:01.885 "tls_version": 0, 00:48:01.885 "zerocopy_threshold": 0 00:48:01.885 } 00:48:01.885 } 00:48:01.885 ] 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "subsystem": "vmd", 00:48:01.885 "config": [] 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "subsystem": "accel", 00:48:01.885 "config": [ 00:48:01.885 { 00:48:01.885 "method": "accel_set_options", 00:48:01.885 "params": { 00:48:01.885 "buf_count": 2048, 00:48:01.885 "large_cache_size": 16, 00:48:01.885 "sequence_count": 2048, 00:48:01.885 "small_cache_size": 128, 00:48:01.885 "task_count": 2048 00:48:01.885 } 00:48:01.885 } 00:48:01.885 ] 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "subsystem": "bdev", 00:48:01.885 "config": [ 00:48:01.885 { 00:48:01.885 "method": "bdev_set_options", 00:48:01.885 "params": { 00:48:01.885 "bdev_auto_examine": true, 00:48:01.885 "bdev_io_cache_size": 256, 00:48:01.885 "bdev_io_pool_size": 65535, 00:48:01.885 "iobuf_large_cache_size": 16, 00:48:01.885 "iobuf_small_cache_size": 128 00:48:01.885 } 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "method": "bdev_raid_set_options", 00:48:01.885 "params": { 00:48:01.885 "process_max_bandwidth_mb_sec": 0, 00:48:01.885 "process_window_size_kb": 1024 00:48:01.885 } 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "method": "bdev_iscsi_set_options", 00:48:01.885 "params": { 00:48:01.885 
"timeout_sec": 30 00:48:01.885 } 00:48:01.885 }, 00:48:01.885 { 00:48:01.885 "method": "bdev_nvme_set_options", 00:48:01.885 "params": { 00:48:01.885 "action_on_timeout": "none", 00:48:01.885 "allow_accel_sequence": false, 00:48:01.885 "arbitration_burst": 0, 00:48:01.885 "bdev_retry_count": 3, 00:48:01.885 "ctrlr_loss_timeout_sec": 0, 00:48:01.885 "delay_cmd_submit": true, 00:48:01.885 "dhchap_dhgroups": [ 00:48:01.885 "null", 00:48:01.885 "ffdhe2048", 00:48:01.885 "ffdhe3072", 00:48:01.885 "ffdhe4096", 00:48:01.885 "ffdhe6144", 00:48:01.885 "ffdhe8192" 00:48:01.885 ], 00:48:01.885 "dhchap_digests": [ 00:48:01.886 "sha256", 00:48:01.886 "sha384", 00:48:01.886 "sha512" 00:48:01.886 ], 00:48:01.886 "disable_auto_failback": false, 00:48:01.886 "fast_io_fail_timeout_sec": 0, 00:48:01.886 "generate_uuids": false, 00:48:01.886 "high_priority_weight": 0, 00:48:01.886 "io_path_stat": false, 00:48:01.886 "io_queue_requests": 512, 00:48:01.886 "keep_alive_timeout_ms": 10000, 00:48:01.886 "low_priority_weight": 0, 00:48:01.886 "medium_priority_weight": 0, 00:48:01.886 "nvme_adminq_poll_period_us": 10000, 00:48:01.886 "nvme_error_stat": false, 00:48:01.886 "nvme_ioq_poll_period_us": 0, 00:48:01.886 "rdma_cm_event_timeout_ms": 0, 00:48:01.886 "rdma_max_cq_size": 0, 00:48:01.886 "rdma_srq_size": 0, 00:48:01.886 "reconnect_delay_sec": 0, 00:48:01.886 "timeout_admin_us": 0, 00:48:01.886 "timeout_us": 0, 00:48:01.886 "transport_ack_timeout": 0, 00:48:01.886 "transport_retry_count": 4, 00:48:01.886 "transport_tos": 0 00:48:01.886 } 00:48:01.886 }, 00:48:01.886 { 00:48:01.886 "method": "bdev_nvme_attach_controller", 00:48:01.886 "params": { 00:48:01.886 "adrfam": "IPv4", 00:48:01.886 "ctrlr_loss_timeout_sec": 0, 00:48:01.886 "ddgst": false, 00:48:01.886 "fast_io_fail_timeout_sec": 0, 00:48:01.886 "hdgst": false, 00:48:01.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:01.886 "multipath": "multipath", 00:48:01.886 "name": "nvme0", 00:48:01.886 "prchk_guard": false, 00:48:01.886 "prchk_reftag": false, 00:48:01.886 "psk": "key0", 00:48:01.886 "reconnect_delay_sec": 0, 00:48:01.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:01.886 "traddr": "127.0.0.1", 00:48:01.886 "trsvcid": "4420", 00:48:01.886 "trtype": "TCP" 00:48:01.886 } 00:48:01.886 }, 00:48:01.886 { 00:48:01.886 "method": "bdev_nvme_set_hotplug", 00:48:01.886 "params": { 00:48:01.886 "enable": false, 00:48:01.886 "period_us": 100000 00:48:01.886 } 00:48:01.886 }, 00:48:01.886 { 00:48:01.886 "method": "bdev_wait_for_examine" 00:48:01.886 } 00:48:01.886 ] 00:48:01.886 }, 00:48:01.886 { 00:48:01.886 "subsystem": "nbd", 00:48:01.886 "config": [] 00:48:01.886 } 00:48:01.886 ] 00:48:01.886 }' 00:48:01.886 05:59:21 keyring_file -- keyring/file.sh@115 -- # killprocess 111200 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 111200 ']' 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@956 -- # kill -0 111200 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111200 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:01.886 killing process with pid 111200 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process 
with pid 111200' 00:48:01.886 Received shutdown signal, test time was about 1.000000 seconds 00:48:01.886 00:48:01.886 Latency(us) 00:48:01.886 [2024-11-20T05:59:21.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:01.886 [2024-11-20T05:59:21.805Z] =================================================================================================================== 00:48:01.886 [2024-11-20T05:59:21.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@971 -- # kill 111200 00:48:01.886 05:59:21 keyring_file -- common/autotest_common.sh@976 -- # wait 111200 00:48:02.455 05:59:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=111682 00:48:02.455 05:59:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 111682 /var/tmp/bperf.sock 00:48:02.455 05:59:22 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 111682 ']' 00:48:02.455 05:59:22 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:48:02.455 05:59:22 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:02.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:48:02.455 05:59:22 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:48:02.455 05:59:22 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:02.455 05:59:22 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:48:02.455 05:59:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:48:02.455 05:59:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:48:02.455 "subsystems": [ 00:48:02.455 { 00:48:02.455 "subsystem": "keyring", 00:48:02.455 "config": [ 00:48:02.455 { 00:48:02.455 "method": "keyring_file_add_key", 00:48:02.455 "params": { 00:48:02.455 "name": "key0", 00:48:02.455 "path": "/tmp/tmp.zoCe5kcMcg" 00:48:02.455 } 00:48:02.455 }, 00:48:02.455 { 00:48:02.456 "method": "keyring_file_add_key", 00:48:02.456 "params": { 00:48:02.456 "name": "key1", 00:48:02.456 "path": "/tmp/tmp.uQIOKkxopD" 00:48:02.456 } 00:48:02.456 } 00:48:02.456 ] 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "subsystem": "iobuf", 00:48:02.456 "config": [ 00:48:02.456 { 00:48:02.456 "method": "iobuf_set_options", 00:48:02.456 "params": { 00:48:02.456 "enable_numa": false, 00:48:02.456 "large_bufsize": 135168, 00:48:02.456 "large_pool_count": 1024, 00:48:02.456 "small_bufsize": 8192, 00:48:02.456 "small_pool_count": 8192 00:48:02.456 } 00:48:02.456 } 00:48:02.456 ] 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "subsystem": "sock", 00:48:02.456 "config": [ 00:48:02.456 { 00:48:02.456 "method": "sock_set_default_impl", 00:48:02.456 "params": { 00:48:02.456 "impl_name": "posix" 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "method": "sock_impl_set_options", 00:48:02.456 "params": { 00:48:02.456 "enable_ktls": false, 00:48:02.456 "enable_placement_id": 0, 00:48:02.456 "enable_quickack": false, 00:48:02.456 "enable_recv_pipe": true, 00:48:02.456 "enable_zerocopy_send_client": false, 00:48:02.456 "enable_zerocopy_send_server": true, 00:48:02.456 "impl_name": "ssl", 00:48:02.456 "recv_buf_size": 4096, 00:48:02.456 "send_buf_size": 4096, 00:48:02.456 "tls_version": 0, 00:48:02.456 "zerocopy_threshold": 0 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 
"method": "sock_impl_set_options", 00:48:02.456 "params": { 00:48:02.456 "enable_ktls": false, 00:48:02.456 "enable_placement_id": 0, 00:48:02.456 "enable_quickack": false, 00:48:02.456 "enable_recv_pipe": true, 00:48:02.456 "enable_zerocopy_send_client": false, 00:48:02.456 "enable_zerocopy_send_server": true, 00:48:02.456 "impl_name": "posix", 00:48:02.456 "recv_buf_size": 2097152, 00:48:02.456 "send_buf_size": 2097152, 00:48:02.456 "tls_version": 0, 00:48:02.456 "zerocopy_threshold": 0 00:48:02.456 } 00:48:02.456 } 00:48:02.456 ] 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "subsystem": "vmd", 00:48:02.456 "config": [] 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "subsystem": "accel", 00:48:02.456 "config": [ 00:48:02.456 { 00:48:02.456 "method": "accel_set_options", 00:48:02.456 "params": { 00:48:02.456 "buf_count": 2048, 00:48:02.456 "large_cache_size": 16, 00:48:02.456 "sequence_count": 2048, 00:48:02.456 "small_cache_size": 128, 00:48:02.456 "task_count": 2048 00:48:02.456 } 00:48:02.456 } 00:48:02.456 ] 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "subsystem": "bdev", 00:48:02.456 "config": [ 00:48:02.456 { 00:48:02.456 "method": "bdev_set_options", 00:48:02.456 "params": { 00:48:02.456 "bdev_auto_examine": true, 00:48:02.456 "bdev_io_cache_size": 256, 00:48:02.456 "bdev_io_pool_size": 65535, 00:48:02.456 "iobuf_large_cache_size": 16, 00:48:02.456 "iobuf_small_cache_size": 128 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "method": "bdev_raid_set_options", 00:48:02.456 "params": { 00:48:02.456 "process_max_bandwidth_mb_sec": 0, 00:48:02.456 "process_window_size_kb": 1024 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "method": "bdev_iscsi_set_options", 00:48:02.456 "params": { 00:48:02.456 "timeout_sec": 30 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "method": "bdev_nvme_set_options", 00:48:02.456 "params": { 00:48:02.456 "action_on_timeout": "none", 00:48:02.456 "allow_accel_sequence": false, 00:48:02.456 "arbitration_burst": 0, 00:48:02.456 "bdev_retry_count": 3, 00:48:02.456 "ctrlr_loss_timeout_sec": 0, 00:48:02.456 "delay_cmd_submit": true, 00:48:02.456 "dhchap_dhgroups": [ 00:48:02.456 "null", 00:48:02.456 "ffdhe2048", 00:48:02.456 "ffdhe3072", 00:48:02.456 "ffdhe4096", 00:48:02.456 "ffdhe6144", 00:48:02.456 "ffdhe8192" 00:48:02.456 ], 00:48:02.456 "dhchap_digests": [ 00:48:02.456 "sha256", 00:48:02.456 "sha384", 00:48:02.456 "sha512" 00:48:02.456 ], 00:48:02.456 "disable_auto_failback": false, 00:48:02.456 "fast_io_fail_timeout_sec": 0, 00:48:02.456 "generate_uuids": false, 00:48:02.456 "high_priority_weight": 0, 00:48:02.456 "io_path_stat": false, 00:48:02.456 "io_queue_requests": 512, 00:48:02.456 "keep_alive_timeout_ms": 10000, 00:48:02.456 "low_priority_weight": 0, 00:48:02.456 "medium_priority_weight": 0, 00:48:02.456 "nvme_adminq_poll_period_us": 10000, 00:48:02.456 "nvme_error_stat": false, 00:48:02.456 "nvme_ioq_poll_period_us": 0, 00:48:02.456 "rdma_cm_event_timeout_ms": 0, 00:48:02.456 "rdma_max_cq_size": 0, 00:48:02.456 "rdma_srq_size": 0, 00:48:02.456 "reconnect_delay_sec": 0, 00:48:02.456 "timeout_admin_us": 0, 00:48:02.456 "timeout_us": 0, 00:48:02.456 "transport_ack_timeout": 0, 00:48:02.456 "transport_retry_count": 4, 00:48:02.456 "transport_tos": 0 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "method": "bdev_nvme_attach_controller", 00:48:02.456 "params": { 00:48:02.456 "adrfam": "IPv4", 00:48:02.456 "ctrlr_loss_timeout_sec": 0, 00:48:02.456 "ddgst": false, 00:48:02.456 "fast_io_fail_timeout_sec": 0, 
00:48:02.456 "hdgst": false, 00:48:02.456 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:02.456 "multipath": "multipath", 00:48:02.456 "name": "nvme0", 00:48:02.456 "prchk_guard": false, 00:48:02.456 "prchk_reftag": false, 00:48:02.456 "psk": "key0", 00:48:02.456 "reconnect_delay_sec": 0, 00:48:02.456 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:02.456 "traddr": "127.0.0.1", 00:48:02.456 "trsvcid": "4420", 00:48:02.456 "trtype": "TCP" 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "method": "bdev_nvme_set_hotplug", 00:48:02.456 "params": { 00:48:02.456 "enable": false, 00:48:02.456 "period_us": 100000 00:48:02.456 } 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "method": "bdev_wait_for_examine" 00:48:02.456 } 00:48:02.456 ] 00:48:02.456 }, 00:48:02.456 { 00:48:02.456 "subsystem": "nbd", 00:48:02.456 "config": [] 00:48:02.456 } 00:48:02.456 ] 00:48:02.456 }' 00:48:02.456 [2024-11-20 05:59:22.145542] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:48:02.456 [2024-11-20 05:59:22.145675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111682 ] 00:48:02.456 [2024-11-20 05:59:22.299610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:02.715 [2024-11-20 05:59:22.384596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:02.715 [2024-11-20 05:59:22.614234] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:48:03.283 05:59:23 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:03.283 05:59:23 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:48:03.283 05:59:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:48:03.283 05:59:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:48:03.283 05:59:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:03.541 05:59:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:48:03.541 05:59:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:48:03.541 05:59:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:03.541 05:59:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:48:03.541 05:59:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:48:03.542 05:59:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:03.542 05:59:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:03.800 05:59:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:48:04.059 05:59:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:48:04.059 05:59:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:48:04.059 05:59:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:48:04.059 05:59:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:48:04.060 05:59:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:48:04.060 05:59:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:48:04.318 05:59:24 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:48:04.318 05:59:24 keyring_file -- 
keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:48:04.318 05:59:24 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:48:04.318 05:59:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:48:04.577 05:59:24 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:48:04.577 05:59:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:48:04.577 05:59:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.zoCe5kcMcg /tmp/tmp.uQIOKkxopD 00:48:04.577 05:59:24 keyring_file -- keyring/file.sh@20 -- # killprocess 111682 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 111682 ']' 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@956 -- # kill -0 111682 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111682 00:48:04.577 killing process with pid 111682 00:48:04.577 Received shutdown signal, test time was about 1.000000 seconds 00:48:04.577 00:48:04.577 Latency(us) 00:48:04.577 [2024-11-20T05:59:24.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:04.577 [2024-11-20T05:59:24.496Z] =================================================================================================================== 00:48:04.577 [2024-11-20T05:59:24.496Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111682' 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@971 -- # kill 111682 00:48:04.577 05:59:24 keyring_file -- common/autotest_common.sh@976 -- # wait 111682 00:48:04.836 05:59:24 keyring_file -- keyring/file.sh@21 -- # killprocess 111178 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 111178 ']' 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@956 -- # kill -0 111178 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@957 -- # uname 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111178 00:48:04.836 killing process with pid 111178 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111178' 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@971 -- # kill 111178 00:48:04.836 05:59:24 keyring_file -- common/autotest_common.sh@976 -- # wait 111178 00:48:05.095 00:48:05.095 real 0m16.660s 00:48:05.095 user 0m41.273s 00:48:05.095 sys 0m4.197s 00:48:05.095 05:59:24 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:05.095 ************************************ 00:48:05.095 END TEST keyring_file 00:48:05.095 ************************************ 00:48:05.095 05:59:24 keyring_file -- common/autotest_common.sh@10 -- # 
00:48:05.095 05:59:24 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:48:05.354 05:59:25 -- spdk/autotest.sh@289 -- # [[ y == y ]]
00:48:05.354 05:59:25 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh
00:48:05.354 05:59:25 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:48:05.354 05:59:25 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:48:05.354 05:59:25 -- common/autotest_common.sh@10 -- # set +x
00:48:05.354 ************************************
00:48:05.354 START TEST keyring_linux
00:48:05.354 ************************************
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh
00:48:05.354 Joined session keyring: 112756883
00:48:05.354 * Looking for test storage...
00:48:05.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@336 -- # IFS=.-:
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@337 -- # IFS=.-:
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@338 -- # local 'op=<'
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@344 -- # case "$op" in
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@345 -- # : 1
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 ))
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@365 -- # decimal 1
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@353 -- # local d=1
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@355 -- # echo 1
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@366 -- # decimal 2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@353 -- # local d=2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@355 -- # echo 2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:48:05.354 05:59:25 keyring_linux -- scripts/common.sh@368 -- # return 0
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:48:05.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:05.354 --rc genhtml_branch_coverage=1
00:48:05.354 --rc genhtml_function_coverage=1
00:48:05.354 --rc genhtml_legend=1
00:48:05.354 --rc geninfo_all_blocks=1
00:48:05.354 --rc geninfo_unexecuted_blocks=1
00:48:05.354 
00:48:05.354 '
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:48:05.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:05.354 --rc genhtml_branch_coverage=1
00:48:05.354 --rc genhtml_function_coverage=1
00:48:05.354 --rc genhtml_legend=1
00:48:05.354 --rc geninfo_all_blocks=1
00:48:05.354 --rc geninfo_unexecuted_blocks=1
00:48:05.354 
00:48:05.354 '
00:48:05.354 05:59:25 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:48:05.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:05.354 --rc genhtml_branch_coverage=1
00:48:05.354 --rc genhtml_function_coverage=1
00:48:05.354 --rc genhtml_legend=1
00:48:05.355 --rc geninfo_all_blocks=1
00:48:05.355 --rc geninfo_unexecuted_blocks=1
00:48:05.355 
00:48:05.355 '
00:48:05.355 05:59:25 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:48:05.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:48:05.355 --rc genhtml_branch_coverage=1
00:48:05.355 --rc genhtml_function_coverage=1
00:48:05.355 --rc genhtml_legend=1
00:48:05.355 --rc geninfo_all_blocks=1
00:48:05.355 --rc geninfo_unexecuted_blocks=1
00:48:05.355 
00:48:05.355 '
00:48:05.355 05:59:25 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh
00:48:05.355 05:59:25 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9b54f445-f067-44f7-9ac3-ecdb7acf7203
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=9b54f445-f067-44f7-9ac3-ecdb7acf7203
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:48:05.355 05:59:25 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob
00:48:05.355 05:59:25 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:48:05.355 05:59:25 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:48:05.355 05:59:25 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:48:05.355 05:59:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:05.355 05:59:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:05.355 05:59:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:05.355 05:59:25 keyring_linux -- paths/export.sh@5 -- # export PATH
00:48:05.355 05:59:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@51 -- # : 0
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:48:05.355 05:59:25 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@17 -- # name=key0
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@733 -- # python -
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0
/tmp/:spdk-test:key0
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@17 -- # name=key1
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:48:05.614 05:59:25 keyring_linux -- nvmf/common.sh@733 -- # python -
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1
00:48:05.614 05:59:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1
/tmp/:spdk-test:key1
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=111845
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:48:05.614 05:59:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 111845
00:48:05.614 05:59:25 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 111845 ']'
00:48:05.614 05:59:25 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:48:05.614 05:59:25 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100
00:48:05.614 05:59:25 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:48:05.614 05:59:25 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable
00:48:05.614 05:59:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:48:05.614 [2024-11-20 05:59:25.450523] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:48:05.614 [2024-11-20 05:59:25.450638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111845 ]
00:48:05.886 [2024-11-20 05:59:25.608635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:48:05.886 [2024-11-20 05:59:25.673142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:48:06.492 05:59:26 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:48:06.492 05:59:26 keyring_linux -- common/autotest_common.sh@866 -- # return 0
00:48:06.492 05:59:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:48:06.492 05:59:26 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable
00:48:06.492 05:59:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:48:06.492 [2024-11-20 05:59:26.401921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:48:06.751 null0
00:48:06.751 [2024-11-20 05:59:26.433912] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:48:06.751 [2024-11-20 05:59:26.434206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:48:06.751 05:59:26 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:48:06.751 05:59:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:48:06.751 1032111656
00:48:06.751 05:59:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:48:06.751 1054839788
00:48:06.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:48:06.751 05:59:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=111886
00:48:06.751 05:59:26 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:48:06.751 05:59:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 111886 /var/tmp/bperf.sock
00:48:06.751 05:59:26 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 111886 ']'
00:48:06.751 05:59:26 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:48:06.751 05:59:26 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100
00:48:06.751 05:59:26 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:48:06.751 05:59:26 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable
00:48:06.751 05:59:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:48:06.751 [2024-11-20 05:59:26.506016] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:48:06.751 [2024-11-20 05:59:26.506409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111886 ]
00:48:06.751 [2024-11-20 05:59:26.650676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:48:07.008 [2024-11-20 05:59:26.714034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:48:07.943 05:59:27 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:48:07.943 05:59:27 keyring_linux -- common/autotest_common.sh@866 -- # return 0
00:48:07.943 05:59:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:48:07.943 05:59:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:48:07.943 05:59:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:48:07.943 05:59:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:48:08.202 05:59:28 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:48:08.202 05:59:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:48:08.461 [2024-11-20 05:59:28.282395] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:48:08.461 nvme0n1
00:48:08.719 05:59:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:48:08.720 05:59:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:48:08.720 05:59:28 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:48:08.720 05:59:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:48:08.720 05:59:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:48:08.720 05:59:28 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:48:08.978 05:59:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:48:08.978 05:59:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:48:08.978 05:59:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:48:08.978 05:59:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:48:08.978 05:59:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:48:08.978 05:59:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:48:08.978 05:59:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:48:09.236 05:59:28 keyring_linux -- keyring/linux.sh@25 -- # sn=1032111656
00:48:09.236 05:59:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:48:09.237 05:59:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:48:09.237 05:59:28 keyring_linux -- keyring/linux.sh@26 -- # [[ 1032111656 == \1\0\3\2\1\1\1\6\5\6 ]]
00:48:09.237 05:59:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1032111656
00:48:09.237 05:59:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:48:09.237 05:59:28 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:48:09.237 Running I/O for 1 seconds...
00:48:10.611 17556.00 IOPS, 68.58 MiB/s
00:48:10.611 
00:48:10.611 Latency(us)
00:48:10.611 [2024-11-20T05:59:30.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:48:10.611 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:48:10.611 nvme0n1 : 1.01 17549.33 68.55 0.00 0.00 7264.14 2449.80 9112.62
00:48:10.611 [2024-11-20T05:59:30.530Z] ===================================================================================================================
00:48:10.611 [2024-11-20T05:59:30.530Z] Total : 17549.33 68.55 0.00 0.00 7264.14 2449.80 9112.62
00:48:10.611 {
00:48:10.611 "results": [
00:48:10.611 {
00:48:10.611 "job": "nvme0n1",
00:48:10.611 "core_mask": "0x2",
00:48:10.611 "workload": "randread",
00:48:10.611 "status": "finished",
00:48:10.611 "queue_depth": 128,
00:48:10.611 "io_size": 4096,
00:48:10.611 "runtime": 1.007731,
00:48:10.611 "iops": 17549.326159461205,
00:48:10.611 "mibps": 68.55205531039533,
00:48:10.611 "io_failed": 0,
00:48:10.611 "io_timeout": 0,
00:48:10.611 "avg_latency_us": 7264.137135102386,
00:48:10.611 "min_latency_us": 2449.7980952380954,
00:48:10.611 "max_latency_us": 9112.624761904763
00:48:10.611 }
00:48:10.611 ],
00:48:10.611 "core_count": 1
00:48:10.611 }
00:48:10.611 05:59:30 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:48:10.611 05:59:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:48:10.611 05:59:30 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:48:10.611 05:59:30 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:48:10.611 05:59:30 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:48:10.611 05:59:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:48:10.611 05:59:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:48:10.611 05:59:30 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:48:10.870 05:59:30 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:48:10.870 05:59:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:48:10.870 05:59:30 keyring_linux -- keyring/linux.sh@23 -- # return
00:48:10.870 05:59:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:10.870 05:59:30 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:48:10.870 05:59:30 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:10.870 05:59:30 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:48:10.870 05:59:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:48:10.870 05:59:30 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:48:10.870 05:59:30 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:48:10.870 05:59:30 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:10.870 05:59:30 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:48:11.129 [2024-11-20 05:59:31.006363] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:48:11.129 [2024-11-20 05:59:31.007151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1f6a0 (107): Transport endpoint is not connected
00:48:11.129 [2024-11-20 05:59:31.008089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1f6a0 (9): Bad file descriptor
00:48:11.129 [2024-11-20 05:59:31.009084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:48:11.129 [2024-11-20 05:59:31.009132] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:48:11.129 [2024-11-20 05:59:31.009154] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:48:11.129 [2024-11-20 05:59:31.009179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
2024/11/20 05:59:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:48:11.129 request:
00:48:11.129 {
00:48:11.129 "method": "bdev_nvme_attach_controller",
00:48:11.129 "params": {
00:48:11.129 "name": "nvme0",
00:48:11.129 "trtype": "tcp",
00:48:11.129 "traddr": "127.0.0.1",
00:48:11.129 "adrfam": "ipv4",
00:48:11.129 "trsvcid": "4420",
00:48:11.129 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:48:11.129 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:48:11.129 "prchk_reftag": false,
00:48:11.129 "prchk_guard": false,
00:48:11.129 "hdgst": false,
00:48:11.129 "ddgst": false,
00:48:11.129 "psk": ":spdk-test:key1",
00:48:11.129 "allow_unrecognized_csi": false
00:48:11.129 }
00:48:11.129 }
00:48:11.129 Got JSON-RPC error response
00:48:11.129 GoRPCClient: error on JSON-RPC call
00:48:11.129 05:59:31 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:48:11.129 05:59:31 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:48:11.129 05:59:31 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:48:11.129 05:59:31 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@33 -- # sn=1032111656
00:48:11.129 05:59:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1032111656
00:48:11.386 1 links removed
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@33 -- # sn=1054839788
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1054839788
00:48:11.386 1 links removed
00:48:11.386 05:59:31 keyring_linux -- keyring/linux.sh@41 -- # killprocess 111886
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 111886 ']'
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 111886
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111886
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
killing process with pid 111886
05:59:31 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111886'
05:59:31 keyring_linux -- common/autotest_common.sh@971 -- # kill 111886
Received shutdown signal, test time was about 1.000000 seconds
00:48:11.386 
00:48:11.386 Latency(us)
00:48:11.386 [2024-11-20T05:59:31.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:48:11.386 [2024-11-20T05:59:31.305Z] ===================================================================================================================
00:48:11.386 [2024-11-20T05:59:31.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:48:11.386 05:59:31 keyring_linux -- common/autotest_common.sh@976 -- # wait 111886
00:48:11.644 05:59:31 keyring_linux -- keyring/linux.sh@42 -- # killprocess 111845
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 111845 ']'
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 111845
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@957 -- # uname
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 111845
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 111845'
killing process with pid 111845
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@971 -- # kill 111845
00:48:11.644 05:59:31 keyring_linux -- common/autotest_common.sh@976 -- # wait 111845
00:48:11.903 
00:48:11.903 real 0m6.736s
00:48:11.903 user 0m12.875s
00:48:11.903 sys 0m1.914s
00:48:11.903 05:59:31 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable
00:48:11.903 05:59:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:48:11.903 ************************************
00:48:11.903 END TEST keyring_linux
00:48:11.903 ************************************
00:48:11.903 05:59:31 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:48:11.903 05:59:31 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:48:11.903 05:59:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:48:11.903 05:59:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:48:11.903 05:59:31 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:48:11.903 05:59:31 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:48:11.903 05:59:31 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:48:11.903 05:59:31 -- common/autotest_common.sh@724 -- # xtrace_disable
00:48:11.903 05:59:31 -- common/autotest_common.sh@10 -- # set +x
00:48:11.903 05:59:31 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:48:11.903 05:59:31 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:48:11.903 05:59:31 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:48:11.903 05:59:31 -- common/autotest_common.sh@10 -- # set +x
00:48:14.508 INFO: APP EXITING
00:48:14.508 INFO: killing all VMs
00:48:14.508 INFO: killing vhost app
00:48:14.508 INFO: EXIT DONE
00:48:15.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:48:15.074 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:48:15.074 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:48:16.010 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:48:16.010 Cleaning
00:48:16.010 Removing: /var/run/dpdk/spdk0/config
00:48:16.010 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:48:16.010 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:48:16.010 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:48:16.010 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:48:16.010 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:48:16.010 Removing: /var/run/dpdk/spdk0/hugepage_info
00:48:16.010 Removing: /var/run/dpdk/spdk1/config
00:48:16.010 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:48:16.010 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:48:16.010 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:48:16.010 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:48:16.010 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:48:16.010 Removing: /var/run/dpdk/spdk1/hugepage_info
00:48:16.010 Removing: /var/run/dpdk/spdk2/config
00:48:16.010 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:48:16.010 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:48:16.010 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:48:16.010 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:48:16.010 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:48:16.010 Removing: /var/run/dpdk/spdk2/hugepage_info
00:48:16.010 Removing: /var/run/dpdk/spdk3/config
00:48:16.010 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:48:16.010 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:48:16.010 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:48:16.010 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:48:16.010 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:48:16.010 Removing: /var/run/dpdk/spdk3/hugepage_info
00:48:16.010 Removing: /var/run/dpdk/spdk4/config
00:48:16.010 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:48:16.010 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:48:16.010 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:48:16.010 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:48:16.010 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:48:16.010 Removing: /var/run/dpdk/spdk4/hugepage_info
00:48:16.010 Removing: /dev/shm/nvmf_trace.0
00:48:16.010 Removing: /dev/shm/spdk_tgt_trace.pid58733
00:48:16.010 Removing: /var/run/dpdk/spdk0
00:48:16.010 Removing: /var/run/dpdk/spdk1
00:48:16.010 Removing: /var/run/dpdk/spdk2
00:48:16.010 Removing: /var/run/dpdk/spdk3
00:48:16.010 Removing: /var/run/dpdk/spdk4
00:48:16.010 Removing: /var/run/dpdk/spdk_pid101427
00:48:16.010 Removing: /var/run/dpdk/spdk_pid101467
00:48:16.010 Removing: /var/run/dpdk/spdk_pid101828
00:48:16.010 Removing: /var/run/dpdk/spdk_pid101880
00:48:16.010 Removing: /var/run/dpdk/spdk_pid102298
00:48:16.010 Removing: /var/run/dpdk/spdk_pid102869
00:48:16.010 Removing: /var/run/dpdk/spdk_pid103305
00:48:16.010 Removing: /var/run/dpdk/spdk_pid104359
00:48:16.270 Removing: /var/run/dpdk/spdk_pid105429
00:48:16.270 Removing: /var/run/dpdk/spdk_pid105541
00:48:16.270 Removing: /var/run/dpdk/spdk_pid105609
00:48:16.270 Removing: /var/run/dpdk/spdk_pid107220
00:48:16.270 Removing: /var/run/dpdk/spdk_pid107548
00:48:16.270 Removing: /var/run/dpdk/spdk_pid107884
00:48:16.270 Removing: /var/run/dpdk/spdk_pid108480
00:48:16.270 Removing: /var/run/dpdk/spdk_pid108485
00:48:16.270 Removing: /var/run/dpdk/spdk_pid108896
00:48:16.270 Removing: /var/run/dpdk/spdk_pid109056
00:48:16.270 Removing: /var/run/dpdk/spdk_pid109215
00:48:16.270 Removing: /var/run/dpdk/spdk_pid109312
00:48:16.270 Removing: /var/run/dpdk/spdk_pid109472
00:48:16.270 Removing: /var/run/dpdk/spdk_pid109581
00:48:16.270 Removing: /var/run/dpdk/spdk_pid110313
00:48:16.270 Removing: /var/run/dpdk/spdk_pid110348
00:48:16.270 Removing: /var/run/dpdk/spdk_pid110387
00:48:16.270 Removing: /var/run/dpdk/spdk_pid110642
00:48:16.270 Removing: /var/run/dpdk/spdk_pid110673
00:48:16.270 Removing: /var/run/dpdk/spdk_pid110703
00:48:16.270 Removing: /var/run/dpdk/spdk_pid111178
00:48:16.270 Removing: /var/run/dpdk/spdk_pid111200
00:48:16.270 Removing: /var/run/dpdk/spdk_pid111682
00:48:16.270 Removing: /var/run/dpdk/spdk_pid111845
00:48:16.270 Removing: /var/run/dpdk/spdk_pid111886
00:48:16.270 Removing: /var/run/dpdk/spdk_pid58575
00:48:16.270 Removing: /var/run/dpdk/spdk_pid58733
00:48:16.270 Removing: /var/run/dpdk/spdk_pid58989
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59081
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59107
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59217
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59247
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59392
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59677
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59861
00:48:16.270 Removing: /var/run/dpdk/spdk_pid59951
00:48:16.270 Removing: /var/run/dpdk/spdk_pid60051
00:48:16.270 Removing: /var/run/dpdk/spdk_pid60154
00:48:16.270 Removing: /var/run/dpdk/spdk_pid60187
00:48:16.270 Removing: /var/run/dpdk/spdk_pid60229
00:48:16.270 Removing: /var/run/dpdk/spdk_pid60293
00:48:16.270 Removing: /var/run/dpdk/spdk_pid60416
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61046
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61116
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61185
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61213
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61303
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61331
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61412
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61432
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61482
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61500
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61546
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61568
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61717
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61758
00:48:16.270 Removing: /var/run/dpdk/spdk_pid61837
00:48:16.270 Removing: /var/run/dpdk/spdk_pid62341
00:48:16.270 Removing: /var/run/dpdk/spdk_pid62722
00:48:16.270 Removing: /var/run/dpdk/spdk_pid65238
00:48:16.270 Removing: /var/run/dpdk/spdk_pid65284
00:48:16.270 Removing: /var/run/dpdk/spdk_pid65634
00:48:16.270 Removing: /var/run/dpdk/spdk_pid65684
00:48:16.270 Removing: /var/run/dpdk/spdk_pid66101
00:48:16.270 Removing: /var/run/dpdk/spdk_pid66682
00:48:16.270 Removing: /var/run/dpdk/spdk_pid67108
00:48:16.270 Removing: /var/run/dpdk/spdk_pid68144
00:48:16.270 Removing: /var/run/dpdk/spdk_pid69221
00:48:16.270 Removing: /var/run/dpdk/spdk_pid69343
00:48:16.270 Removing: /var/run/dpdk/spdk_pid69406
00:48:16.534 Removing: /var/run/dpdk/spdk_pid71028
00:48:16.534 Removing: /var/run/dpdk/spdk_pid71371
00:48:16.535 Removing: /var/run/dpdk/spdk_pid75257
00:48:16.535 Removing: /var/run/dpdk/spdk_pid75673
00:48:16.535 Removing: /var/run/dpdk/spdk_pid76314
00:48:16.535 Removing: /var/run/dpdk/spdk_pid76860
00:48:16.535 Removing: /var/run/dpdk/spdk_pid82607
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83112
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83215
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83379
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83419
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83458
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83511
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83690
00:48:16.535 Removing: /var/run/dpdk/spdk_pid83849
00:48:16.535 Removing: /var/run/dpdk/spdk_pid84133
00:48:16.535 Removing: /var/run/dpdk/spdk_pid84268
00:48:16.535 Removing: /var/run/dpdk/spdk_pid84533
00:48:16.535 Removing: /var/run/dpdk/spdk_pid84642
00:48:16.535 Removing: /var/run/dpdk/spdk_pid84777
00:48:16.535 Removing: /var/run/dpdk/spdk_pid85177
00:48:16.535 Removing: /var/run/dpdk/spdk_pid85641
00:48:16.535 Removing: /var/run/dpdk/spdk_pid85642
00:48:16.535 Removing: /var/run/dpdk/spdk_pid85643
00:48:16.535 Removing: /var/run/dpdk/spdk_pid85915
00:48:16.535 Removing: /var/run/dpdk/spdk_pid86182
00:48:16.535 Removing: /var/run/dpdk/spdk_pid86600
00:48:16.535 Removing: /var/run/dpdk/spdk_pid86923
00:48:16.535 Removing: /var/run/dpdk/spdk_pid87520
00:48:16.535 Removing: /var/run/dpdk/spdk_pid87522
00:48:16.535 Removing: /var/run/dpdk/spdk_pid87913
00:48:16.535 Removing: /var/run/dpdk/spdk_pid87938
00:48:16.535 Removing: /var/run/dpdk/spdk_pid87952
00:48:16.535 Removing: /var/run/dpdk/spdk_pid87977
00:48:16.535 Removing: /var/run/dpdk/spdk_pid87990
00:48:16.535 Removing: /var/run/dpdk/spdk_pid88383
00:48:16.535 Removing: /var/run/dpdk/spdk_pid88432
00:48:16.535 Removing: /var/run/dpdk/spdk_pid88827
00:48:16.535 Removing: /var/run/dpdk/spdk_pid89074
00:48:16.535 Removing: /var/run/dpdk/spdk_pid89605
00:48:16.535 Removing: /var/run/dpdk/spdk_pid90222
00:48:16.535 Removing: /var/run/dpdk/spdk_pid91626
00:48:16.535 Removing: /var/run/dpdk/spdk_pid92276
00:48:16.535 Removing: /var/run/dpdk/spdk_pid92278
00:48:16.535 Removing: /var/run/dpdk/spdk_pid94319
00:48:16.536 Removing: /var/run/dpdk/spdk_pid94390
00:48:16.536 Removing: /var/run/dpdk/spdk_pid94467
00:48:16.536 Removing: /var/run/dpdk/spdk_pid94558
00:48:16.536 Removing: /var/run/dpdk/spdk_pid94701
00:48:16.536 Removing: /var/run/dpdk/spdk_pid94790
00:48:16.536 Removing: /var/run/dpdk/spdk_pid94862
00:48:16.536 Removing: /var/run/dpdk/spdk_pid94933
00:48:16.536 Removing: /var/run/dpdk/spdk_pid95332
00:48:16.536 Removing: /var/run/dpdk/spdk_pid96115
00:48:16.536 Removing: /var/run/dpdk/spdk_pid97519
00:48:16.536 Removing: /var/run/dpdk/spdk_pid97712
00:48:16.536 Removing: /var/run/dpdk/spdk_pid97998
00:48:16.536 Removing: /var/run/dpdk/spdk_pid98549
00:48:16.536 Removing: /var/run/dpdk/spdk_pid98950
00:48:16.798 Clean
00:48:16.798 05:59:36 -- common/autotest_common.sh@1451 -- # return 0
00:48:16.798 05:59:36 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:48:16.798 05:59:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:48:16.798 05:59:36 -- common/autotest_common.sh@10 -- # set +x
00:48:16.798 05:59:36 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:48:16.798 05:59:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:48:16.798 05:59:36 -- common/autotest_common.sh@10 -- # set +x
00:48:16.798 05:59:36 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:48:16.798 05:59:36 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:48:16.798 05:59:36 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:48:16.798 05:59:36 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:48:16.798 05:59:36 -- spdk/autotest.sh@394 -- # hostname
00:48:16.798 05:59:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:48:17.057 geninfo: WARNING: invalid characters removed from testname!
00:48:43.604 06:00:02 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:46.894 06:00:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:49.430 06:00:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:51.961 06:00:11 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:53.864 06:00:13 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:56.396 06:00:16 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:48:58.925 06:00:18 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:48:58.925 06:00:18 -- spdk/autorun.sh@1 -- $ timing_finish
00:48:58.925 06:00:18 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:48:58.925 06:00:18 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:48:58.925 06:00:18 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:48:58.925 06:00:18 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:48:58.933 + [[ -n 5266 ]]
00:48:58.933 + sudo kill 5266
00:48:58.941 [Pipeline] }
00:48:58.955 [Pipeline] // timeout
00:48:58.961 [Pipeline] }
00:48:58.968 [Pipeline] // stage
00:48:58.972 [Pipeline] }
00:48:58.987 [Pipeline] // catchError
00:48:58.997 [Pipeline] stage
00:48:58.999 [Pipeline] { (Stop VM)
00:48:59.011 [Pipeline] sh
00:48:59.286 + vagrant halt
00:49:03.461 ==> default: Halting domain...
00:49:10.051 [Pipeline] sh
00:49:10.329 + vagrant destroy -f
00:49:14.522 ==> default: Removing domain...
00:49:14.532 [Pipeline] sh
00:49:14.806 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:49:14.814 [Pipeline] }
00:49:14.826 [Pipeline] // stage
00:49:14.829 [Pipeline] }
00:49:14.840 [Pipeline] // dir
00:49:14.844 [Pipeline] }
00:49:14.857 [Pipeline] // wrap
00:49:14.863 [Pipeline] }
00:49:14.874 [Pipeline] // catchError
00:49:14.883 [Pipeline] stage
00:49:14.884 [Pipeline] { (Epilogue)
00:49:14.895 [Pipeline] sh
00:49:15.169 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:49:21.734 [Pipeline] catchError
00:49:21.736 [Pipeline] {
00:49:21.747 [Pipeline] sh
00:49:22.033 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:49:22.290 Artifacts sizes are good
00:49:22.300 [Pipeline] }
00:49:22.314 [Pipeline] // catchError
00:49:22.323 [Pipeline] archiveArtifacts
00:49:22.328 Archiving artifacts
00:49:22.422 [Pipeline] cleanWs
00:49:22.432 [WS-CLEANUP] Deleting project workspace...
00:49:22.433 [WS-CLEANUP] Deferred wipeout is used...
00:49:22.437 [WS-CLEANUP] done
00:49:22.438 [Pipeline] }
00:49:22.451 [Pipeline] // stage
00:49:22.454 [Pipeline] }
00:49:22.465 [Pipeline] // node
00:49:22.468 [Pipeline] End of Pipeline
00:49:22.493 Finished: SUCCESS